Methodology
How we score, and why our rankings can be trusted.
Every tool on cs2apps.com is hand-tested by a real CS2 trader against the criteria below. No paid placements, no scraped reviews, no algorithmic shortcuts. This page documents the exact rubric, the re-verification cadence, and what triggers a re-rank.
The five criteria we score on
Every criterion is scored 1–5 against what we observed during testing, then weighted into the final rank for the tool's primary category; a worked sketch of the weighting follows the list.
Breadth of CS2 coverage
Weight: 25%. How much of the CS2 economy the tool covers: every active case, every relevant marketplace, every pattern family, every skin family. Specialists score high if their niche is what the user is hunting.
Data accuracy & freshness
Weight: 25%. Does the price match what's quoted on the source marketplaces right now? Are stale listings filtered out? When the tool publishes a number, can we reproduce it against the primary source?
UX and usability
Weight: 20%. Can a CS2 trader get the answer they came for in under 30 seconds? Mobile responsive? Reasonable defaults? No paywalls in front of basic information that should be free?
Public API or programmatic access
Weight: 15%. Is there a documented API a builder can use? Are the rate limits reasonable? Tools that are otherwise excellent but ship no API drop a tier in categories where downstream tooling matters (e.g. portfolio trackers, indexes).
Pricing transparency
Weight: 15%. Are the spreads, fees, or paid-tier costs clearly stated? We penalise tools that hide their fee structure behind a sign-up wall or use opaque per-item spreads, even when those tools are otherwise fast and clean.
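To make the weighting concrete, here is a minimal TypeScript sketch of how the five criterion scores combine into a single 1–5 result. The criterion names and weights come from the list above; the types, helper name, and example scores are illustrative assumptions, not our production scoring code.

```ts
// Hypothetical sketch of the weighted scoring described above.
// Weights come from this page; everything else is illustrative.

type Criterion =
  | "coverage"   // Breadth of CS2 coverage, weight 0.25
  | "accuracy"   // Data accuracy & freshness, weight 0.25
  | "usability"  // UX and usability, weight 0.20
  | "api"        // Public API or programmatic access, weight 0.15
  | "pricing";   // Pricing transparency, weight 0.15

const WEIGHTS: Record<Criterion, number> = {
  coverage: 0.25,
  accuracy: 0.25,
  usability: 0.20,
  api: 0.15,
  pricing: 0.15,
};

// Each criterion is scored 1-5 during testing; because the weights sum to 1,
// the weighted total stays on the same 1-5 scale.
function weightedScore(scores: Record<Criterion, number>): number {
  return (Object.keys(WEIGHTS) as Criterion[]).reduce(
    (total, criterion) => total + scores[criterion] * WEIGHTS[criterion],
    0,
  );
}

// Example: a specialist tool with deep coverage but a weak API story.
const example = weightedScore({
  coverage: 5,
  accuracy: 4,
  usability: 4,
  api: 2,
  pricing: 5,
}); // 5*0.25 + 4*0.25 + 4*0.20 + 2*0.15 + 5*0.15 = 4.1
```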
The five-step process
1. Load a real CS2 inventory
We never review a tool from screenshots or marketing copy. Every entry starts with us connecting the tool to a live Steam inventory (or, for non-portfolio tools, running a real item lookup). If the tool can't be tested with real CS2 data, it doesn't get listed.
2. Score against the five criteria
Each criterion above is scored on a 1–5 scale based on what we saw during testing, weighted, and summed. The sum determines whether the tool earns a /best/ slot or only appears in the broader directory.
3. Cross-check primary sources
For price-data tools, we sample 10 random items, look up each on the underlying marketplace's actual listing page, and check whether the tool's quoted number matches within a tolerance that normal market movement can explain (a sketch of this check follows the list). Discrepancies are written into the cons list.
4. Re-test every 90 days
Every tool entry carries a lastVerified date. We re-test on a rolling 90-day cycle. A tool that goes overdue gets a flag in its sidebar and is excluded from /best/ rankings until it's re-verified.
5. Publish, then watch for corrections
Once published, the review is open to corrections via our contact page. If the tool's team disputes a fact (pricing, feature claim, API status), we re-test the disputed point and update if warranted. Disagreements about subjective verdicts stay as the editor's call.
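To illustrate step 3's cross-check, here is a minimal TypeScript sketch of the relative-tolerance comparison between a tool's quoted price and the live listing price. The 5% threshold, types, and helper names are illustrative assumptions rather than our actual tooling; in practice the tolerance is whatever normal market movement between refreshes can explain.

```ts
// Minimal sketch of the step-3 price cross-check, assuming a simple
// relative tolerance. Threshold and shapes are illustrative.

interface PriceSample {
  item: string;        // e.g. "AK-47 | Redline (Field-Tested)"
  toolPrice: number;   // what the tool quotes
  sourcePrice: number; // what the marketplace listing page shows right now
}

function withinTolerance(sample: PriceSample, tolerance = 0.05): boolean {
  const diff = Math.abs(sample.toolPrice - sample.sourcePrice);
  return diff <= sample.sourcePrice * tolerance;
}

// Samples that fall outside the tolerance are what end up in the cons list.
function discrepancies(samples: PriceSample[], tolerance = 0.05): PriceSample[] {
  return samples.filter((s) => !withinTolerance(s, tolerance));
}
```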
What triggers a re-rank
- A pricing change (e.g. a tool moves from free to freemium).
- A new feature lands that materially affects one of the five criteria.
- A new entrant in the category passes the listing bar.
- A reader-reported correction proves a fact in the review is wrong.
- The 90-day re-verification cycle comes due for that tool (see the staleness sketch below).
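Both step 4 and the last trigger above hinge on the lastVerified date. Here is a minimal TypeScript sketch of that staleness check, assuming a plain ISO date string and a fixed 90-day window; the entry shape and helper names are illustrative, not our site's actual data model.

```ts
// Sketch of the 90-day re-verification check from step 4. Only the
// lastVerified field and the 90-day window come from this page.

interface ToolEntry {
  slug: string;
  lastVerified: string; // ISO date, e.g. "2024-03-01"
}

const REVERIFY_DAYS = 90;

function isOverdue(entry: ToolEntry, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - new Date(entry.lastVerified).getTime();
  return ageMs > REVERIFY_DAYS * 24 * 60 * 60 * 1000;
}

// Overdue entries get a sidebar flag and drop out of /best/ rankings
// until they are re-tested.
function eligibleForBest(entries: ToolEntry[]): ToolEntry[] {
  return entries.filter((e) => !isOverdue(e));
}
```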
What we will not do
- Accept payment for inclusion or for a higher rank.
- Fabricate numeric review scores when there is no defensible measurement.
- Remove a documented con because the tool team asked us to.
- List a tool we have never personally used on a real CS2 inventory.
- Auto-rank our own tools at the top of a category.
Related: full editorial policy (governance, conflicts of interest, sponsored placements), current affiliate disclosures, and the chronological re-verification log.
Currently covering 17 tools across 10 categories. Found a tool we’re missing? Suggest it via the contribute page.