AI agent skills: how agents self-configure with SKILL.md files

Mateusz Sroka

13 Mar 2026

7 min read


Learn how AI agent skills work, why they outperform raw documentation by 16.2 percentage points, and see real SKILL.md examples from DexPaprika's crypto API.

AI agent skills are bundles of procedural knowledge, stored as markdown files, that teach agents how to complete specific tasks without trial and error. Unlike tools that execute functions or prompts that set general behavior, skills encode domain expertise: workflows, pitfalls, naming quirks, and step-by-step procedures an agent loads on demand.

By early 2026, Claude Code, GitHub Copilot, OpenAI Codex, and Cursor all converged on the same SKILL.md format within three months of each other. The speed of that convergence tells you something about how badly the ecosystem needed this pattern.

Think of skills as the difference between handing someone a hammer (a tool) and handing them a manual for framing a house (a skill). The hammer does one thing when you swing it. The manual tells you when to use the hammer, when to use the level instead, and which mistakes will bring the wall down. An AI agent with tools can call APIs. An agent with skills knows which API to call first, what parameter format to expect, and which undocumented quirk will cause a silent failure.

The distinction matters because agents fail most often from process errors, not execution errors. SkillsBench tested 86 tasks across 11 domains and found that curated skills improved agent success rates by 16.2 percentage points on average. In healthcare tasks, the improvement hit 51.9 points. The agent wasn't getting better tools. It was getting better instructions.

| Concept | What it is | When it loads | Example |
| --- | --- | --- | --- |
| System prompt | Always-on context | Every conversation | "You are a helpful assistant" |
| Tool / MCP | Executable function | When called at runtime | `getTokenDetails()` |
| Skill | Procedural knowledge | On demand, when relevant | SKILL.md with workflow + gotchas |
| Plugin | Distribution package | At install time | Claude Code marketplace plugin |

How AI agent skills work

Skills work in a three-step loop: discover, load, execute.

Discovery. At startup, an agent scans known locations for skill files. Claude Code checks ~/.claude/skills/ and .claude/skills/. Cursor checks .cursor/rules/. GitHub Copilot checks for SKILL.md folders. The agent reads only the metadata (name and description), not the full content. This is what makes skills scale: a project with 50 skills costs no more context than one with 5, because the agent only loads what it needs.
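The discovery step can be sketched in a few lines. This is a hedged illustration, not any platform's actual implementation: it assumes frontmatter delimited by `---` lines and reads only the `name` and `description` keys.

```python
import re

def parse_frontmatter(text):
    """Read only the YAML-style frontmatter of a SKILL.md file,
    i.e. the metadata an agent sees at discovery time, before
    deciding whether the body is worth loading."""
    m = re.match(r"---\s*\n(.*?)\n---", text, re.DOTALL)
    if not m:
        return {}
    meta = {}
    for line in m.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip().strip('"')
    return meta

SKILL = """---
name: dexpaprika-rest
description: "Query DexPaprika REST API for token prices, pool data, and OHLCV"
---
# DexPaprika REST API
(full instructions the agent skips at discovery time)
"""

print(parse_frontmatter(SKILL))
```

Only this small dictionary stays in memory per skill; the body below the second `---` never touches the context window unless the skill is selected.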

Loading. When a task matches a skill's description, the agent loads the full SKILL.md into its context window. The file contains markdown instructions, the same kind of text you'd write in a README, but structured for an agent to follow. No special runtime, no server, no authentication. Just text.
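The loading decision can be sketched as a toy selector. The keyword-overlap matching here is an assumption standing in for the model judgment real agents use, and the skill entries are illustrative:

```python
import re

def words(s):
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def select_skill(task, skills):
    """Pick the skill whose description best overlaps the task.
    Only the winner's full body would then enter the context
    window; every other skill stays a one-line metadata entry."""
    best, best_overlap = None, 0
    for skill in skills:
        overlap = len(words(task) & words(skill["description"]))
        if overlap > best_overlap:
            best, best_overlap = skill, overlap
    return best["body"] if best else None

skills = [
    {"description": "Query DexPaprika REST API for token prices, pool data, and OHLCV",
     "body": "# DexPaprika REST API\n(workflow + gotchas)"},
    {"description": "Generate PDF reports from templates",
     "body": "# PDF reports\n(layout rules)"},
]

print(select_skill("get token prices for WETH", skills).splitlines()[0])
```

The design point survives the simplification: matching happens against descriptions, so loading cost is paid only for the one skill that is actually relevant.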

Execution. The agent follows the skill's instructions while using its available tools to complete the task. A skill might say "call the search endpoint first, then filter results client-side because fuzzy matching returns false positives." The agent reads that instruction and acts on it using its MCP tools or HTTP capabilities.

The lazy loading pattern here is the key insight, and I think it's what most people miss about skills vs MCP. MCP is eager: all tool metadata loads upfront regardless of whether the agent uses it. Skills are lazy: they load only when relevant. As Anthropic's docs put it, "MCP connections give Claude access to tools. Skills teach Claude how to use those tools effectively."

Here's what a typical SKILL.md looks like:

```markdown
---
name: dexpaprika-rest
description: "Query DexPaprika REST API for token prices, pool data, and OHLCV"
---
# DexPaprika REST API

Base URL: https://api.dexpaprika.com
Authentication: None required

## Key endpoints
- GET /networks/{network}/tokens/{address} — token details
- GET /networks/{network}/pools — pool rankings

## Important: field naming quirk
URL paths use `network` and `token_address`, but JSON responses
use `chain` and `id` for the same values.
```

That last line about field naming prevents hundreds of failed API calls. Without it, the agent maps response fields to request parameters incorrectly and fails silently.
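As a hedged illustration of why that line matters: the response payload below is invented, and only the rename itself comes from the skill file.

```python
def follow_up_params(token_response):
    """Map token-details response fields back to request path
    parameters, applying the rename the skill documents: responses
    say `chain` and `id` where URLs say `network` and `token_address`."""
    return {
        "network": token_response["chain"],     # not token_response["network"]
        "token_address": token_response["id"],  # not token_response["token_address"]
    }

# Illustrative payload shape, not taken from the real API docs.
resp = {"chain": "ethereum", "id": "0xabc123", "symbol": "TKN"}
print(follow_up_params(resp))
```

Without the skill's warning, the natural guess is that response keys mirror the path parameter names, and that guess fails silently.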

Why skills beat documentation

You might wonder why an agent can't just read your API docs. It can. But docs are written for humans who browse, jump between pages, and fill in gaps from experience. An agent reads linearly within a context window. That mismatch is where skills come in.

The arXiv survey on agent skills identified three reasons skills outperform raw documentation:

Focused context. DexPaprika's full docs run ~2,847 lines (that's the llms-full.txt size). The REST skill file is ~40 lines. An agent using the skill file gets exactly what it needs for 80% of tasks without burning context on endpoints it won't touch.

Procedural framing. Documentation describes what endpoints do. Skills describe how to use them together. "First search for the token, then get its pools, then check OHLCV" is a workflow. Three separate endpoint descriptions aren't.

Gotcha prevention. This is where I've seen skills deliver the most value. The content that prevents failures isn't "here's what the endpoint does" but "here's what will silently break." DexPaprika's streaming skill documents that HTTP/2 causes silent hangs with their SSE endpoint. I spent a solid chunk of time debugging exactly that before I found the flag. One line in a skill file (--http1.1) would have saved it.

One counterintuitive finding from SkillsBench worth calling out: agents can't write their own skills effectively. Self-generated skills showed negligible or negative improvement across 7,308 test trajectories. The knowledge that helps agents most comes from humans who've actually hit the edge cases. Publishing skill files for your API isn't a nice-to-have anymore. If you don't write them, nobody will.

Skills in practice: how DexPaprika teaches agents

DexPaprika publishes two skill files that any AI agent can fetch and use to self-configure.

The REST API skill gives agents the minimum viable context to start querying: the base URL (api.dexpaprika.com), key endpoints, OHLCV parameters (including that limit maxes out at 366), and that critical field naming quirk. It also warns that the /search endpoint uses fuzzy matching, so searching "UNI" returns not just Uniswap but also "United Stables" and "Unit Bitcoin." Agents know to filter client-side.
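The client-side filter the skill calls for might look like the sketch below. The result shapes and ticker symbols are illustrative assumptions; only the fuzzy-matching behavior comes from the skill.

```python
def filter_exact_symbol(results, symbol):
    """Drop fuzzy matches: keep only results whose symbol equals
    the query exactly, case-insensitively."""
    return [r for r in results if r.get("symbol", "").upper() == symbol.upper()]

# Illustrative fuzzy results for a "UNI" search; field shapes are assumed.
results = [
    {"symbol": "UNI",  "name": "Uniswap"},
    {"symbol": "USTB", "name": "United Stables"},
    {"symbol": "UBTC", "name": "Unit Bitcoin"},
]
print(filter_exact_symbol(results, "uni"))  # only the Uniswap entry survives
```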

The streaming skill is more complete. It includes working Python and bash code an agent can copy verbatim, pre-validated token addresses for immediate testing, and three documented gotchas:

| Gotcha | What happens | Fix |
| --- | --- | --- |
| HTTP/2 with SSE | curl silently hangs, no error | Add `--http1.1` flag |
| One invalid asset in batch | Entire stream cancels | Validate via `/search` first |
| Price field `p` is a string | Float parsing loses precision | Parse as decimal |
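The third gotcha is easy to demonstrate. The event payload below is invented; the documented behavior is only that the price field `p` arrives as a string, and parsing it as a float rounds away digits that `Decimal` keeps.

```python
import json
from decimal import Decimal

# Invented payload; only the string-typed `p` field reflects the skill.
raw_event = '{"p": "0.000000123456789012345678"}'

price_str = json.loads(raw_event)["p"]
as_float = float(price_str)      # rounds to roughly 17 significant digits
as_decimal = Decimal(price_str)  # keeps every digit exactly

print(as_decimal == Decimal(price_str))      # True
print(Decimal(str(as_float)) == as_decimal)  # False: precision already lost
```

For micro-cap token prices with many significant digits, that rounding is exactly the kind of silent corruption a one-line skill note prevents.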

An agent reads one of these files and starts querying DexPaprika's API immediately:

```shell
# Any AI agent can self-configure by fetching the skill file:
curl https://dexpaprika.com/agents/skill.md
```

The agents.dexpaprika.com hub routes agents to whichever skill file matches their task. For runtime tool execution, the MCP server at mcp.dexpaprika.com provides 14 typed tools. For centralized exchange data, CoinPaprika's MCP server offers 23 free tools. Skills tell agents how to use these tools well. Tools execute. Skills teach.

Frequently asked questions

Q: Are skills the same as system prompts?

A: No. System prompts load into every conversation regardless of the task. Skills load only when relevant, which is why they scale better. A project can have 50 skills without consuming any extra context until one is actually needed.

Q: Can an AI agent write its own skills?

A: Not well. SkillsBench tested this across 7,308 trajectories and found self-generated skills provided negligible or negative improvement. Agents can't reliably author the procedural knowledge they benefit from consuming. Humans who've debugged the edge cases need to write them.

Q: Do I need skills if I already have MCP tools?

A: Yes. MCP tells the agent what functions are available. Skills tell it how to combine them effectively, what order to call them in, and what will break. They're complementary layers, not competing ones.

Q: How are skills different from Cursor rules?

A: Cursor rules (.cursor/rules/) are always-on context, closer to system prompts. Skills are demand-loaded based on task relevance. Cursor supports both patterns. The old .cursorrules file in the project root is deprecated in favor of the rules directory.

Q: Which AI platforms support skills?

A: As of early 2026: Claude Code, GitHub Copilot (since December 2025), OpenAI Codex (experimental, December 2025), Cursor, and 20+ agents via the agentskills.io specification. All converged on near-identical SKILL.md format within three months.

Q: How do skills work with crypto data APIs?

A: DexPaprika, a free crypto data API covering 25 million+ tokens across 33 blockchains, publishes skill files at dexpaprika.com/agents/skill.md and dexpaprika.com/agents/streaming/skill.md. Agents fetch the file, self-configure, and start querying. No API key, no registration.

What to remember about AI agent skills

Key takeaways

  • Skills are the cheapest high-impact investment in agent adoption. A 40-line markdown file improved agent success by 16.2 percentage points in SkillsBench. No other agent optimization gets close to that ratio of effort to impact.
  • Don't wait for agents to figure out your API. They can't write their own skills, and that finding from 7,308 test trajectories isn't going to change soon. If you want agents using your API, you have to write the skill file.
  • The platform convergence is real. Claude Code, Copilot, Codex, and Cursor all adopted SKILL.md in late 2025. If you write one skill file, it works across all four.
  • For how skills fit into the broader AI-readable API stack, see our guide on making your API readable by AI agents. For the runtime protocol that skills complement, see what is MCP.
