PatentChecker evaluation guide
What Is Being Evaluated
PatentChecker is evaluated as a deterministic patent valuation and evidence system, not as a black-box prediction model. A correct run must prove:
- the input satisfied the locked schema
- normalization was deterministic
- scoring math was explicit
- missing data did not silently become a score
- the JSON and Markdown artifacts are reproducible
- the signed bundle can be verified offline
CodeFest Alignment
The EPO CodeFest 2026 challenge is "Patent and IP portfolio evaluation." The official page states that final code must be submitted by 28 April 2026 at 23:59 hrs CET, finalists are announced in June/July 2026, and the prize ceremony is on 16 September 2026.
The rules specify source code in machine-readable form, collateral material such as datasets and prompts, tools used to generate or compile the source code, and a document explaining the approach and how the solution satisfies the evaluation criteria. They do not publicly pin a GitHub-repository-only format; follow any direct participant instructions from EPO for the final upload channel.
The implementation is organized around the published judging dimensions:
| Criterion | PatentChecker evidence |
|---|---|
| Creativity and innovation | deterministic valuation plus cryptographic evidence bundles |
| Functionality and usability | value --source, patent-valuation ingest, patent-valuation diff, patent-valuation benchmark, lint, bundle, verify, explain CLI commands |
| Technical implementation | strict schemas, AJV validation, Ed25519 signatures, deterministic zip |
| Impact on IP valuation | component scores, signal contributions, weighting model, penalties, risk flags |
Explainability and scalability are treated as supporting proof: the solution exposes all math and runs offline over normalized signal records with deterministic hash hooks for caching.
Validation Layers
1. Schema Validation
Inputs are validated against schemas/patent-valuation-inputs.v0.1.schema.json. Reports are validated against schemas/patent-valuation-report.v0.1.schema.json. Required properties, closed objects, enum values, numeric ranges, SHA-256 patterns, and date-time formats are enforced before artifacts are accepted.
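The "closed objects" discipline is the load-bearing part of this layer: unknown keys are rejected rather than ignored. As a minimal sketch (the shipped validator uses AJV against the published schema files; the field names below are hypothetical):

```typescript
// Illustrative fail-closed structural check: required keys must be present
// and no unknown keys are tolerated (a "closed object"). The real pipeline
// compiles schemas/patent-valuation-inputs.v0.1.schema.json with AJV;
// patent_id / grant_status / cpc_codes here are made-up example fields.
type CheckResult = { ok: boolean; errors: string[] };

function checkClosedObject(
  input: Record<string, unknown>,
  required: string[],
  allowed: Set<string>
): CheckResult {
  const errors: string[] = [];
  for (const key of required) {
    if (!(key in input)) errors.push(`missing required property: ${key}`);
  }
  for (const key of Object.keys(input)) {
    if (!allowed.has(key)) errors.push(`unknown property rejected: ${key}`);
  }
  return { ok: errors.length === 0, errors };
}

const required = ["patent_id", "grant_status"];
const allowed = new Set([...required, "cpc_codes"]);

// A record carrying an unexpected field fails closed instead of being scored.
const verdict = checkClosedObject(
  { patent_id: "EP1234567", grant_status: "granted", surprise: true },
  required,
  allowed
);
console.log(verdict.ok, verdict.errors[0]);
// false "unknown property rejected: surprise"
```

Rejecting unknown keys means a typo in a signal name surfaces as a validation error, not as a silently missing signal.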
2. Source Ingestion
patentchecker value --source validates patentchecker.patent_valuation_source records, emits canonical patentchecker.patent_valuation_inputs, scores the normalized input, and bundles the source, input, report, provenance, schemas, verifier, and signature in one deterministic run. patentchecker patent-valuation ingest exposes the same normalization step when a workflow needs the intermediate input file.

The source schema covers structured source records for uspto, epo, wipo, and mixed source systems. The ingester parses claims, citations, family/jurisdiction coverage, legal events, assignee history, licensing signals, and market signals into the scoring fields. It also supports a hash cache keyed by canonical source JSON; cached outputs are revalidated against the input schema before reuse.
This is not yet a live USPTO/EPO/WIPO API adapter. It is the deterministic normalization layer between extracted source data and the valuation engine.
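A cache keyed by canonical source JSON means the key depends only on the record's content, never on key order or formatting. A minimal sketch, assuming a hypothetical canonicalize helper with recursively sorted keys (the shipped canonical form may differ in detail, and the real cache additionally revalidates cached outputs against the input schema):

```typescript
import { createHash } from "node:crypto";

// Canonicalize by recursively sorting object keys so the same source
// record always serializes to the same bytes.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(canonicalize).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    const entries = Object.keys(obj)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalize(obj[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

// Cache key: SHA-256 over the canonical source JSON.
function cacheKey(source: unknown): string {
  return createHash("sha256").update(canonicalize(source)).digest("hex");
}

// Key order in the incoming record does not change the cache key.
const a = cacheKey({ source_system: "epo", patent_id: "EP1" });
const b = cacheKey({ patent_id: "EP1", source_system: "epo" });
console.log(a === b); // true
```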
3. Fail-Closed Scoring
The valuation score is emitted only when all 14 required signal slots are present. Incomplete inputs emit:

```json
{
  "scoring_status": "incomplete",
  "reason": "missing_data"
}
```

No valuation_score, valuation_tier, or score_band is present in that state. The report lists each missing path in missing_data[].

4. Determinism
The test suite checks that fixed inputs produce fixed canonical JSON and stable hashes. The valuation compiler uses:
- sorted keys
- sorted CPC codes, jurisdictions, risk flags, and missing-data entries
- fixed numeric rounding
- fixed generated timestamps from input
- no network calls
- no random identifiers
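These rules can be exercised directly: compiling the same input twice must yield byte-identical canonical JSON and therefore identical hashes. A minimal sketch with a hypothetical compile step (the real compiler's fields and rounding policy live in the valuation engine):

```typescript
import { createHash } from "node:crypto";

// Hypothetical deterministic compile step: sorted arrays, sorted keys,
// fixed 6-decimal rounding, timestamp taken from the input rather than
// the wall clock, no network calls, no random identifiers.
function compileReport(input: { cpc_codes: string[]; raw_score: number; as_of: string }) {
  return {
    as_of: input.as_of,                                   // timestamp from input
    cpc_codes: [...input.cpc_codes].sort(),               // sorted CPC codes
    valuation_score: Number(input.raw_score.toFixed(6)),  // fixed rounding
  };
}

function reportHash(report: object): string {
  // A sorted replacer array makes JSON.stringify emit keys in a stable order.
  const canonical = JSON.stringify(report, Object.keys(report).sort());
  return createHash("sha256").update(canonical).digest("hex");
}

const input = { cpc_codes: ["H01M", "A61K"], raw_score: 67.4343301, as_of: "2026-01-01T00:00:00Z" };
const h1 = reportHash(compileReport(input));
const h2 = reportHash(compileReport(input));
console.log(h1 === h2); // true: fixed input, fixed hash
```

The same hash comparison is what lets the bundle verifier and the benchmark prove that repeated runs produced identical reports.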
5. Golden Fixtures
The repository includes three complete valuation examples:
| Fixture | Score | Tier | Confidence | Purpose |
|---|---|---|---|---|
| tier_a_strategic_asset | 88.738410 | Tier A: strategic asset | 1.000000 | high-signal, high-market, broad family |
| tier_b_competitive | 67.434330 | Tier B: competitive | 1.000000 | competitive asset with moderate market signal |
| tier_c_low_leverage | 0.000000 | Tier C: low leverage | 1.000000 | expired, low-citation, narrow asset with heavy penalties |
The Tier C raw weighted score is 21.611240; the -43 penalty total drives the weighted score below zero (21.611240 - 43 = -21.388760), so the final score is clamped to 0.000000.

6. Bundle Verification
Each valuation run can be packaged into:
```text
bundle/
  MANIFEST.json
  MANIFEST.sig
  SIGNER.txt
  SUMMARY.txt
  signer_pubkey.pem
  replay_verify.sh
  verify.py
  artifacts/
```

Verification checks:
- manifest canonical bytes
- Ed25519 signature over MANIFEST.json
- signer fingerprint policy
- manifest file ordering and uniqueness
- SHA-256 hash for every covered file
- bundled valuation source schema compliance when source artifacts are present
- bundled valuation input schema compliance
- bundled valuation report schema compliance
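The signature check at the core of this list can be sketched in isolation with Node's built-in Ed25519 support. The key pair below is generated inline so the sketch is self-contained; the real flow loads signer_pubkey.pem from the bundle and pins it by fingerprint:

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Stand-in for the bundle's signer keys (illustrative only).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The signature covers the canonical bytes of MANIFEST.json.
const manifestBytes = Buffer.from(
  JSON.stringify({ files: [{ path: "artifacts/report.json", sha256: "0".repeat(64) }] })
);
const signature = sign(null, manifestBytes, privateKey); // null: Ed25519 signs raw bytes

// Verification must pass for the exact bytes and fail for anything else.
const ok = verify(null, manifestBytes, publicKey, signature);
const tampered = verify(null, Buffer.from("tampered"), publicKey, signature);
console.log(ok, tampered); // true false

// A signer fingerprint can be pinned as a hash over the exported public key.
const fingerprint = createHash("sha256")
  .update(publicKey.export({ type: "spki", format: "der" }))
  .digest("hex");
console.log(fingerprint.length); // 64
```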
The top-level verifier fails closed unless a trusted signer is supplied with --expected-fingerprint or the caller explicitly chooses --allow-any-signer.

7. What-If Deltas
The scenario command verifies that score changes are explainable:
- base score
- scenario score
- delta
- band change
- new risk flags
- removed risk flags
This is used for "what if litigation appears?", "what if market expansion adds jurisdictions?", and "what if a patent expires?" demonstrations.
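Assembling such a delta record is mechanical once both runs exist. A minimal sketch, with hypothetical band thresholds (the shipped band boundaries and flag names may differ):

```typescript
// Hypothetical score bands for illustration.
function band(score: number): string {
  if (score >= 80) return "Tier A: strategic asset";
  if (score >= 50) return "Tier B: competitive";
  return "Tier C: low leverage";
}

interface ScenarioDelta {
  base_score: number;
  scenario_score: number;
  delta: number;
  band_change: string | null;
  new_risk_flags: string[];
  removed_risk_flags: string[];
}

// Compose the explainable delta record from a base run and a scenario run.
function explainDelta(
  base: { score: number; risk_flags: string[] },
  scenario: { score: number; risk_flags: string[] }
): ScenarioDelta {
  const baseBand = band(base.score);
  const scenarioBand = band(scenario.score);
  return {
    base_score: base.score,
    scenario_score: scenario.score,
    delta: Number((scenario.score - base.score).toFixed(6)),
    band_change: baseBand === scenarioBand ? null : `${baseBand} -> ${scenarioBand}`,
    new_risk_flags: scenario.risk_flags.filter((f) => !base.risk_flags.includes(f)),
    removed_risk_flags: base.risk_flags.filter((f) => !scenario.risk_flags.includes(f)),
  };
}

// "What if a patent expires?" — score drops, band changes, a flag appears.
const d = explainDelta(
  { score: 67.43433, risk_flags: [] },
  { score: 31.2, risk_flags: ["expired"] }
);
console.log(d.delta, d.band_change, d.new_risk_flags);
```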
8. Report Diff
patentchecker patent-valuation diff compares two valuation reports or bundles and writes canonical JSON plus optional Markdown. It reports score deltas, component deltas, confidence deltas, band/tier changes, risk-flag changes, missing-data changes, penalty changes, and report hashes for before/after auditability.

9. Benchmark
patentchecker patent-valuation benchmark runs dry-run valuation repeatedly over fixed inputs and writes structured runtime evidence. Timing values are environment-dependent, but the report includes hash-stability checks, so benchmark runs also prove that repeated valuations produced identical report hashes.

Example Runs
Value a patent and generate a signed bundle:
```bash
node dist/src/cli/patentchecker.js value \
  examples/patent_valuation/tier_a_strategic_asset.inputs.json \
  --signing-key tests/fixtures/evidence_kit/keys/ed25519_privkey.pem \
  --out-dir /tmp/tier_a.run \
  --emit-markdown \
  --zip \
  --overwrite
```

Value a structured EPO-style source record directly:
```bash
node dist/src/cli/patentchecker.js value \
  --source examples/patent_valuation/epo_lnp_source.v0.1.json \
  --cache-dir /tmp/patentchecker-valuation-cache \
  --signing-key tests/fixtures/evidence_kit/keys/ed25519_privkey.pem \
  --out-dir /tmp/epo_lnp.run \
  --emit-markdown \
  --zip \
  --overwrite
```

Verify the bundle with a pinned signer:
```bash
node dist/src/cli/patentchecker.js verify \
  /tmp/tier_a.run/bundle \
  --expected-fingerprint ed25519:13ff8677e303086f5c731c3d932e486e94c328eba2720d0cfeefcdb7f4851880
```

Explain the report:
```bash
node dist/src/cli/patentchecker.js explain /tmp/tier_a.run/bundle
```

Demonstrate a scenario:
```bash
node dist/src/cli/patentchecker.js patent-valuation scenario \
  --inputs examples/patent_valuation/tier_b_competitive.inputs.json \
  --out /tmp/tier_b_expired.json \
  --signal-grant_status expired
```

Compare two report versions:
```bash
node dist/src/cli/patentchecker.js patent-valuation diff \
  --baseline examples/patent_valuation/tier_b_competitive.report.json \
  --current examples/patent_valuation/tier_c_low_leverage.report.json \
  --out /tmp/tier_b_to_c.diff.json \
  --markdown /tmp/tier_b_to_c.diff.md
```

Benchmark fixed examples:
```bash
node dist/src/cli/patentchecker.js patent-valuation benchmark \
  --inputs examples/patent_valuation/tier_a_strategic_asset.inputs.json \
  --inputs examples/patent_valuation/tier_b_competitive.inputs.json \
  --repeat 10 \
  --out /tmp/valuation.benchmark.json \
  --markdown /tmp/valuation.benchmark.md
```

Tests
Focused coverage:
- tests/patent_valuation.test.ts
- tests/patent_valuation_cli.test.ts
- tests/evidence_kit_bundle.test.ts
- tests/contract_info.test.ts
- tests/demo_crispr_ip_drift_demo.test.ts
- tests/demo_opinion_apple_v_itc.test.ts
- tests/sellable_gate_contract_versioning.test.ts
Run the focused set:
```bash
npm run build
node --test \
  dist/tests/contract_info.test.js \
  dist/tests/sellable_gate_contract_versioning.test.js \
  dist/tests/demo_crispr_ip_drift_demo.test.js \
  dist/tests/demo_opinion_apple_v_itc.test.js \
  dist/tests/evidence_kit_bundle.test.js \
  dist/tests/patent_valuation.test.js \
  dist/tests/patent_valuation_cli.test.js
```

Known Boundaries
Current valuation ingestion starts from structured source records. PatentChecker does not yet ship production live fetch adapters for USPTO, EPO, or WIPO valuation data in this path. That is the next coverage expansion; until then, the source schema and CLI enforce explicit extracted data, deterministic normalization, and missing-data discipline.