Introducing KQL-ADX-Expert: A GitHub Copilot Skill for Kusto Query Language
A new GitHub Copilot CLI skill that brings expert-level KQL guidance, live query execution, and ADX cluster schema discovery right into your terminal.
If you’ve ever stared at a Log Analytics prompt trying to remember whether it’s `has` or `contains` (and which one actually uses the term index), this skill is for you. KQL-ADX-Expert is a GitHub Copilot CLI skill that puts a Kusto Query Language expert in your terminal — one that can write queries, optimize them, and even run them against a live Azure Data Explorer cluster.
The Problem: KQL Is Powerful but Hard to Remember
Kusto Query Language is the backbone of querying across Azure Data Explorer, Azure Monitor Log Analytics, Microsoft Sentinel, and Microsoft Defender. It’s incredibly expressive — pipe-based, columnar, with a rich function library — but it has a steep learning curve.
Common pain points that motivated this project:
- Which table holds the data I need? (`Heartbeat`? `Perf`? `AzureDiagnostics`? A resource-specific table?)
- Getting column names wrong — especially when schemas vary across clusters
- Performance pitfalls — writing queries that scan terabytes when a `where` filter could have pruned early
- Join syntax — nine join flavors, each with different semantics for unmatched rows
Rather than keep a dozen browser tabs open to Microsoft Learn, I wanted that knowledge embedded in the tool I’m already using.
What KQL-ADX-Expert Does
The skill has three layers that work together:
```mermaid
flowchart LR
    A[KQL question] --> B[Skill activates in GH Copilot CLI]
    B --> C{Need live data?}
    C -- No --> D[KQL guidance + query generation]
    C -- Yes --> E[Spider cluster for schema]
    E --> F[Schema-aware query building]
    F --> G[Execute query]
    G --> H[Formatted results in terminal]
```
1. Expert Query Guidance
The skill bundles a comprehensive KQL reference — operators, scalar functions, aggregation functions, join flavors, and performance rules — directly in its prompt context. When you ask a question, it doesn’t just guess; it follows a structured process:
- Check for existing cluster schema
- Identify the right table
- Filter early with `where`
- Project only needed columns
- Aggregate, sort, and validate
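Applied to the classic `StormEvents` sample table, that process might produce a query along these lines (an illustrative sketch, not output captured from the skill):

```kusto
StormEvents
| where StartTime > ago(30d)                             // filter early with where
| project State, DamageProperty                          // only needed columns
| summarize TotalDamage = sum(DamageProperty) by State   // aggregate
| sort by TotalDamage desc                               // sort
| take 5
```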
2. Live Cluster Interaction
This is where it gets interesting. The skill includes a Python CLI tool (`adx_tool.py`) with two subcommands:
Spider — Discovers every database, table, and column in an ADX cluster and saves the schema as JSON:
```shell
python adx_tool.py spider \
  --cluster https://mycluster.region.kusto.windows.net \
  --output cluster_schema.json
```
The output looks like this:
```json
{
  "cluster": "https://mycluster.region.kusto.windows.net",
  "timestamp": "2026-03-20T17:54:00Z",
  "databases": [
    {
      "name": "MyDatabase",
      "tables": [
        {
          "name": "StormEvents",
          "columns": [
            { "name": "StartTime", "type": "datetime" },
            { "name": "State", "type": "string" },
            { "name": "DamageProperty", "type": "long" }
          ]
        }
      ]
    }
  ]
}
```
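Because the output is plain JSON, it is easy to consume programmatically. A minimal sketch (assuming the structure shown above; the helper name is mine, not the tool's) that flattens the schema into one summary line per table:

```python
import json

# Sample matching the spider output structure shown above
schema_json = """
{
  "cluster": "https://mycluster.region.kusto.windows.net",
  "databases": [
    {
      "name": "MyDatabase",
      "tables": [
        {
          "name": "StormEvents",
          "columns": [
            { "name": "StartTime", "type": "datetime" },
            { "name": "State", "type": "string" },
            { "name": "DamageProperty", "type": "long" }
          ]
        }
      ]
    }
  ]
}
"""

def summarize_schema(schema: dict) -> list[str]:
    """Flatten spider output into 'db.table (col:type, ...)' strings."""
    lines = []
    for db in schema["databases"]:
        for table in db["tables"]:
            cols = ", ".join(f"{c['name']}:{c['type']}" for c in table["columns"])
            lines.append(f"{db['name']}.{table['name']} ({cols})")
    return lines

schema = json.loads(schema_json)
for line in summarize_schema(schema):
    print(line)
# → MyDatabase.StormEvents (StartTime:datetime, State:string, DamageProperty:long)
```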
Query — Executes KQL against a live cluster and returns formatted results:
```shell
python adx_tool.py query \
  --cluster https://mycluster.region.kusto.windows.net \
  --database MyDB \
  --query "StormEvents | summarize count() by State | top 5 by count_"
```
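How the tool formats its output is an implementation detail, but rendering result rows as an aligned text table needs nothing beyond the standard library. A minimal sketch (the column names and rows here are illustrative, not real query results):

```python
def format_table(columns: list[str], rows: list[list]) -> str:
    """Render rows as a plain-text table with per-column alignment."""
    table = [columns] + [[str(v) for v in row] for row in rows]
    widths = [max(len(r[i]) for r in table) for i in range(len(columns))]
    sep = "-+-".join("-" * w for w in widths)
    lines = [" | ".join(c.ljust(w) for c, w in zip(r, widths)) for r in table]
    return "\n".join([lines[0], sep] + lines[1:])

print(format_table(["State", "count_"], [["TEXAS", 4701], ["KANSAS", 3166]]))
```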
The tool authenticates via interactive browser login using Entra ID. Tokens are cached for the session, so you only log in once.
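The real flow hands authentication to Entra ID, but the session-level caching behavior can be sketched with a small stdlib-only helper (the names here are illustrative, not taken from the tool):

```python
import time

class TokenCache:
    """Session-scoped cache: fetch a token once, reuse it until near expiry."""

    def __init__(self, fetch_token):
        self._fetch = fetch_token    # callable returning (token, expires_on_epoch)
        self._token = None
        self._expires_on = 0.0

    def get(self):
        # Refresh only when there is no token yet, or it is within 60s of expiry
        if self._token is None or time.time() > self._expires_on - 60:
            self._token, self._expires_on = self._fetch()
        return self._token

# Simulate a login flow to show the caching behavior
login_count = []

def fake_login():
    login_count.append(1)
    return (f"token-{len(login_count)}", time.time() + 3600)

cache = TokenCache(fake_login)
first, second = cache.get(), cache.get()
# fake_login ran once; the second call reused the cached token
```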
3. Bundled Reference Documentation
The skill ships with two detailed reference files that Copilot can draw on:
- `operators.md` — Every tabular operator, all nine join flavors with a comparison table, scalar and aggregation functions, ADX column types, table policies, materialized views, and management commands
- `patterns.md` — Ten fully annotated real-world query examples covering time-series aggregation, JSON parsing, gap detection, anomaly detection with `make-series`, `materialize()` for multi-pass analysis, and more
These aren’t just cheat sheets — they’re structured so the skill can reference specific patterns when building queries for you.
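As a taste of the `make-series` pattern family, here is an illustrative anomaly-detection query in that style (my own sketch, not copied from `patterns.md`, assuming a standard Log Analytics `Perf` table):

```kusto
Perf
| where TimeGenerated > ago(1d) and CounterName == "% Processor Time"
| make-series AvgCpu = avg(CounterValue) default = 0
    on TimeGenerated step 5m by Computer
| extend (Anomalies, Score, Baseline) = series_decompose_anomalies(AvgCpu, 1.5)
```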
The Schema-Aware Difference
Most KQL assistants generate queries based on common table names and hope for the best. KQL-ADX-Expert takes a different approach: spider first, query second.
When you point it at a cluster, the spider walks every database and table, recording exact column names and data types. From that point on, every query it writes uses your actual schema — no more `Column 'X' not found` errors, no guessing at data types in `where` clauses.
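That check can be made mechanical. A hypothetical helper (the function names are mine, not the tool's) that diffs the columns a query references against the spidered schema:

```python
# Spidered schema in the shape the spider subcommand produces
SCHEMA = {
    "databases": [
        {"name": "MyDatabase", "tables": [
            {"name": "StormEvents", "columns": [
                {"name": "StartTime", "type": "datetime"},
                {"name": "State", "type": "string"},
                {"name": "DamageProperty", "type": "long"},
            ]}
        ]}
    ]
}

def known_columns(schema: dict, database: str, table: str) -> set[str]:
    """Column names recorded by the spider for one table."""
    for db in schema["databases"]:
        if db["name"] != database:
            continue
        for t in db["tables"]:
            if t["name"] == table:
                return {c["name"] for c in t["columns"]}
    return set()

def missing_columns(schema, database, table, referenced) -> list[str]:
    """Columns a query references that the schema does not contain."""
    return sorted(set(referenced) - known_columns(schema, database, table))

# 'DamageCost' is a typo for 'DamageProperty': caught before running the query
print(missing_columns(SCHEMA, "MyDatabase", "StormEvents", ["State", "DamageCost"]))
# → ['DamageCost']
```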
```text
> Spider my cluster at https://analytics.eastus.kusto.windows.net

Spidering cluster: https://analytics.eastus.kusto.windows.net
Discovered:
  Databases: 3
  Tables: 47
  Columns: 612
Schema saved to: cluster_schema.json
```
If you encounter column or table errors after schema changes, just re-run the spider to refresh. The skill will pick up the updated JSON automatically.
Example Prompts to Try
Once the skill is installed, these prompts activate it automatically:
| What you ask | What happens |
|---|---|
| “Write a KQL query to find VMs with CPU above 90%” | Generates an optimized Perf table query with proper filters |
| “Explore my ADX cluster at https://...” | Runs the spider and saves schema JSON |
| “Optimize this query: [paste KQL]” | Applies filter-first rules, projection, join hints |
| “What’s the difference between `has` and `contains`?” | Explains that `has` uses the term index (faster) while `contains` does a substring scan |
| “Detect missing heartbeats from my VMs” | Builds a gap-detection query using max(TimeGenerated) and datetime_diff |
| “Parse JSON from AzureDiagnostics” | Uses parse_json() with proper type casting |
| “Run this query against my cluster: …” | Executes live and returns formatted table results |
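For instance, the missing-heartbeats prompt would typically yield a query along these lines (an illustrative sketch, assuming the standard `Heartbeat` table):

```kusto
Heartbeat
| summarize LastBeat = max(TimeGenerated) by Computer
| extend MinutesSilent = datetime_diff("minute", now(), LastBeat)
| where MinutesSilent > 10
| sort by MinutesSilent desc
```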
Getting Started
Install the Skill
```shell
git clone https://github.com/coreycallaway/KQL-ADX-Expert.git
```
Copy or symlink the kql-adx-expert/ directory into your Copilot CLI skills directory. The skill activates automatically on KQL-related keywords.
Set Up Live Query Execution (Optional)
If you want the spider and query runner, install the Python dependencies:
```shell
pip install -r kql-adx-expert/requirements.txt
```
Requires Python 3.9+ and Entra ID credentials with access to your target ADX cluster.
Skill Contents
```text
kql-adx-expert/
├── SKILL.md           # Skill definition — triggers, instructions, KQL reference
├── adx_tool.py        # Python CLI tool — query runner and cluster spider
├── requirements.txt   # Python dependencies
├── README.md          # Detailed tool documentation and setup
└── references/
    ├── operators.md   # Full KQL operator, function, and ADX concept reference
    └── patterns.md    # 10+ annotated real-world query examples
```
What’s Next
This is the initial release. A few things on the roadmap:
- Service principal authentication — for non-interactive environments and CI pipelines
- Query history — track previously executed queries for reuse
- Additional reference patterns — more Sentinel hunting queries, Defender tables, and resource-specific table examples
If you work with KQL regularly and use GitHub Copilot CLI, give it a try and open an issue if something doesn’t work as expected or you have ideas for new patterns.
Wrapping Up
KQL-ADX-Expert brings schema-aware KQL expertise directly into your terminal through GitHub Copilot CLI. Instead of context-switching to docs or the Azure portal query editor, you can write, optimize, and execute queries without leaving your workflow. The spider-first approach means the queries are built on your actual cluster schema — not guesses — which eliminates an entire class of frustrating errors.
The project is open source under the MIT license. Contributions, bug reports, and feature requests are all welcome.