PaperWeave
Graph-grounded benchmarking for scientific paper reasoning
Compare LLM-only, Basic RAG, and TigerGraph GraphRAG pipelines on the same question. Evaluation metrics are computed live on each generated answer.
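One way to picture the side-by-side run is a small harness that sends the same question to each pipeline and records raw stats. This is only a sketch: the pipeline callables (run_llm_only, run_basic_rag, run_graphrag) and their return shape are hypothetical placeholders, not PaperWeave's actual API.

```python
# Minimal comparison sketch. The run_* callables and their (answer, tokens, sources)
# return shape are hypothetical stand-ins for whatever PaperWeave exposes.
import time

def benchmark(question, pipelines):
    """Run the same question through each pipeline and collect raw stats."""
    results = {}
    for name, run in pipelines.items():
        start = time.perf_counter()
        answer, tokens_used, sources = run(question)   # hypothetical return shape
        results[name] = {
            "answer": answer,
            "tokens": tokens_used,
            "latency_s": round(time.perf_counter() - start, 2),
            "sources": sources,
        }
    return results

# Usage (assuming the three pipeline callables exist):
# stats = benchmark(
#     "How does LoRA reduce fine-tuning cost?",
#     {"LLM-only": run_llm_only, "Basic RAG": run_basic_rag, "GraphRAG": run_graphrag},
# )
```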
Pipelines: 3
Domain: NLP / LLM papers
Core metric: Token reduction
Backend: TigerGraph GraphRAG
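The headline metric, token reduction, can be read as the fraction of prompt tokens GraphRAG saves relative to a baseline pipeline on the same question. A minimal sketch of that calculation, assuming this interpretation (the function name and numbers are illustrative, not part of PaperWeave):

```python
def token_reduction(baseline_tokens: int, graphrag_tokens: int) -> float:
    """Fraction of tokens saved by GraphRAG relative to a baseline pipeline."""
    if baseline_tokens == 0:
        return 0.0
    return (baseline_tokens - graphrag_tokens) / baseline_tokens

# Example: a 4,200-token Basic RAG prompt vs. a 1,500-token GraphRAG prompt
# gives a token reduction of ~0.64 (64%).
```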
LLM-only Baseline
Status: idle. Run a query to render this pipeline's answer, token usage, timing breakdown, and evidence.
Tokens: - | Latency: - | Cost: - | Sources: - | BERTSim: - | Judge: - | Hallucination: - | Retrieval quality: -
Basic RAG
Status: idle. Run a query to render this pipeline's answer, token usage, timing breakdown, and evidence.
Tokens: - | Latency: - | Cost: - | Sources: - | BERTSim: - | Judge: - | Hallucination: - | Retrieval quality: -
TigerGraph GraphRAG
Status: idle. Run a query to render this pipeline's answer, token usage, timing breakdown, and evidence.
Tokens: - | Latency: - | Cost: - | Sources: - | BERTSim: - | Judge: - | Hallucination: - | Retrieval quality: -
Live Comparison: Tokens, Latency, Accuracy, Hallucination
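The live comparison could be assembled from one metric record per pipeline, mirroring the card labels above. The sketch below is illustrative: the record fields and table layout are assumptions, BERTSim is approximated with the open-source bert-score package, and the judge, hallucination, and retrieval-quality scores are left as placeholders because their exact rubrics are not specified here.

```python
# Illustrative per-pipeline metric record and comparison table.
from dataclasses import dataclass

@dataclass
class PipelineMetrics:
    name: str
    tokens: int
    latency_s: float
    cost_usd: float
    sources: int
    bert_sim: float | None = None       # semantic similarity to a reference answer
    judge: float | None = None          # LLM-as-judge score
    hallucination: float | None = None  # share of unsupported claims
    retrieval_quality: float | None = None

def bert_sim(candidate: str, reference: str) -> float:
    """Approximate BERTSim with the bert-score package (pip install bert-score)."""
    from bert_score import score
    _, _, f1 = score([candidate], [reference], lang="en")
    return float(f1.mean())

def comparison_table(rows: list[PipelineMetrics]) -> str:
    """Render a plain-text comparison of tokens, latency, and hallucination."""
    lines = [f"{'Pipeline':<22}{'Tokens':>8}{'Latency':>9}{'Halluc.':>9}"]
    for r in rows:
        halluc = "-" if r.hallucination is None else f"{r.hallucination:.2f}"
        lines.append(f"{r.name:<22}{r.tokens:>8}{r.latency_s:>8.1f}s{halluc:>9}")
    return "\n".join(lines)
```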
Evidence Graph Preview
Graph-connected evidence will appear here after a GraphRAG run.
Basic RAG Evidence
No sources returned yet.
GraphRAG Evidence
No sources returned yet.
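For the GraphRAG evidence panel, graph-connected sources would typically come back from an installed query on the TigerGraph instance. The sketch below uses pyTigerGraph; the host, graph name, secret, and the "paper_evidence" query (and its parameters) are hypothetical, so substitute whatever the PaperWeave backend actually installs.

```python
# Sketch of fetching graph-connected evidence via pyTigerGraph.
# The graph name, secret, and "paper_evidence" installed query are hypothetical.
import pyTigerGraph as tg

conn = tg.TigerGraphConnection(
    host="https://your-instance.i.tgcloud.io",  # placeholder host
    graphname="PaperWeave",                     # hypothetical graph name
)
conn.getToken("YOUR_SECRET")                    # REST token for query calls

def graph_evidence(question: str, top_k: int = 5):
    """Run a (hypothetical) installed GSQL query and return evidence rows."""
    return conn.runInstalledQuery(
        "paper_evidence", params={"question": question, "top_k": top_k}
    )

# for row in graph_evidence("How does LoRA reduce fine-tuning cost?"):
#     print(row)
```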