# Minimal test model (Revit + JSON)
This project uses a minimal Revit test model and a corresponding minimal DAQS JSON export to teach, debug, and validate rules.
The goal of this model is not realism — it is control.
## Why a minimal test model exists
When learning or debugging DAQS rules, large project models are a problem:
- Too much data
- Too many categories
- Too many parameters
- Hard to see why a rule behaves a certain way
A minimal model gives you:
- Predictable results
- Clear cause → effect
- Fast iteration
- Confidence that a rule actually works
If a rule does not work on a minimal model, it will not work in production.
## What “minimal” means in practice
This model contains only what is needed to demonstrate:
- Family → FamilySymbol → FamilyInstance relationships
- Shared parameters via GUIDs
- Type-level vs instance-level data
- Category filtering
- Assembly Code filtering
- Parameter existence vs parameter value
Everything else is noise — and intentionally excluded.
## Core elements in the minimal model

### 1. One editable NLRS family

```json
{
  "type": "Family",
  "name": "NLRS_32_DO_WB_binnendeur hout - type 8_gen",
  "values": {
    "isEditable": true,
    "familyCategory": {
      "label": "OST_Doors",
      "type": "Model"
    }
  }
}
```
Why this matters:
- Starts with `NLRS` → naming convention tests
- Editable → excludes system families
- Model category → excludes annotation content
This family is deliberately valid.
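The three scope conditions above can be sketched in plain Python (illustrative only, not DAQS rule syntax; the object shape mirrors the JSON snippet):

```python
import re

# Sample object copied from the minimal model's JSON export.
family = {
    "type": "Family",
    "name": "NLRS_32_DO_WB_binnendeur hout - type 8_gen",
    "values": {
        "isEditable": True,
        "familyCategory": {"label": "OST_Doors", "type": "Model"},
    },
}

def in_naming_scope(obj):
    """Editable model families whose name starts with 'NLRS'."""
    v = obj.get("values", {})
    return (
        obj.get("type") == "Family"
        and v.get("isEditable") is True
        and v.get("familyCategory", {}).get("type") == "Model"
        and re.match(r"NLRS", obj.get("name", "")) is not None
    )

print(in_naming_scope(family))  # True for this deliberately valid family
```

Dropping any one condition (for example `isEditable`) widens the scope, which is exactly the kind of effect the minimal model makes visible.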
### 2. One FamilySymbol (type) with classification data

```json
{
  "type": "FamilySymbol",
  "parent": { "type": "Family" },
  "values": {
    "category": { "label": "OST_Doors" },
    "assemblyCode": "32.31"
  }
}
```
This symbol demonstrates:
- Category lives on the type
- Assembly Code lives on the type
- Classification is type-level data
This allows testing:
- Category filters
- Assembly Code filters
- Regex-based scope rules
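The same type-level filter can be sketched in Python, assuming the object shape above (a second, out-of-scope symbol is added for contrast):

```python
import re

# Two FamilySymbol objects: one in scope, one deliberately out of scope.
symbols = [
    {"type": "FamilySymbol",
     "values": {"category": {"label": "OST_Doors"}, "assemblyCode": "32.31"}},
    {"type": "FamilySymbol",
     "values": {"category": {"label": "OST_Walls"}, "assemblyCode": "21.10"}},
]

def door_types_in_32(objects):
    """FamilySymbols in the Doors category whose Assembly Code starts with '32.'."""
    return [
        o for o in objects
        if o.get("type") == "FamilySymbol"
        and o["values"].get("category", {}).get("label") == "OST_Doors"
        and re.match(r"32\.", o["values"].get("assemblyCode", ""))
    ]

print(len(door_types_in_32(symbols)))  # 1 — only the door type matches
```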
### 3. One FamilyInstance (placed element)

```json
{
  "type": "FamilyInstance",
  "parent": { "type": "FamilySymbol" },
  "values": {
    "mark": "61",
    "levelId": 311
  }
}
```
This instance demonstrates:
- Placement in the model
- Instance-level parameters (`mark`)
- Reference to its type via `parent.id`
This is the object most validation rules ultimately target.
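A minimal sketch of that type lookup, assuming objects carry an `id` field and instances reference their symbol via `parent.id` (the id values here are hypothetical):

```python
# Tiny in-memory model: one symbol and one instance pointing at it.
objects = [
    {"id": 100, "type": "FamilySymbol",
     "values": {"category": {"label": "OST_Doors"}, "assemblyCode": "32.31"}},
    {"id": 200, "type": "FamilyInstance",
     "parent": {"type": "FamilySymbol", "id": 100},
     "values": {"mark": "61", "levelId": 311}},
]

by_id = {o["id"]: o for o in objects}

def type_of(instance):
    """Follow parent.id from a FamilyInstance to its FamilySymbol."""
    return by_id.get(instance.get("parent", {}).get("id"))

symbol = type_of(by_id[200])
print(symbol["values"]["assemblyCode"])  # 32.31 — type-level data, reached from the instance
```

This is why category and Assembly Code filters on instances always go through the symbol: the instance itself does not carry that data.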
### 4. A nested subcomponent (real-world complexity)

```json
{
  "type": "FamilyInstance",
  "superComponentId": 616521
}
```
Why this is included:
- Tests super-/sub-component behaviour
- Ensures rules do not accidentally double-report
- Reflects real Revit families (doors + frames)
This prevents “toy-model optimism”.
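The double-reporting concern can be sketched as a top-level filter, assuming subcomponents carry `superComponentId` as in the snippet:

```python
# One host door and one nested frame; only the host should be reported.
instances = [
    {"id": 1, "type": "FamilyInstance"},                              # host door
    {"id": 2, "type": "FamilyInstance", "superComponentId": 616521},  # nested frame
]

def top_level(objects):
    """FamilyInstances that are not subcomponents of another instance."""
    return [o for o in objects
            if o.get("type") == "FamilyInstance"
            and o.get("superComponentId") is None]

print(len(top_level(instances)))  # 1 — the nested frame is not double-reported
```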
## Shared parameters (explicit and intentional)

Two shared parameters are defined as Parameter objects:

```json
{
  "type": "Parameter",
  "values": {
    "guid": "8fe8f5ce-4979-4679-b5e0-ccfb362b9059",
    "name": "NLRS_C_brandwerendheid"
  }
}
```

```json
{
  "type": "Parameter",
  "values": {
    "guid": "beca98b3-5207-4cde-a26b-7e9797c4eb26",
    "name": "NLRS_C_bouwwerk_laag"
  }
}
```
Why this matters:
- Parameters exist in the project
- Some elements have values
- Some elements do not
- Some objects reference the parameter but have no value
This allows you to test:
- Parameter existence
- Parameter binding
- Parameter value checks
- Model-aware shared-parameter logic
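How the export links elements to shared-parameter values is not shown above, so the sketch below assumes a hypothetical `parameters` map keyed by GUID; the three-way distinction is the point, not the field name:

```python
# GUID taken from the Parameter object above; the "parameters" map on
# elements is an assumed, illustrative structure.
GUID_BRANDWEREND = "8fe8f5ce-4979-4679-b5e0-ccfb362b9059"

element_with_value = {"values": {"parameters": {GUID_BRANDWEREND: "60 min"}}}
element_bound_empty = {"values": {"parameters": {GUID_BRANDWEREND: None}}}
element_unbound = {"values": {"parameters": {}}}

def param_state(element, guid):
    """Distinguish 'not bound', 'bound but empty', and 'has value'."""
    params = element.get("values", {}).get("parameters", {})
    if guid not in params:
        return "not bound"
    return "has value" if params[guid] not in (None, "") else "bound but empty"

for e in (element_with_value, element_bound_empty, element_unbound):
    print(param_state(e, GUID_BRANDWEREND))
```

A rule that only checks values silently treats "not bound" and "bound but empty" the same; the minimal model lets you verify your rule tells them apart.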
## Intentional “noise” objects (important)

The JSON also contains objects like `CadFile`, `LinkType`/`LinkInstance`, `Room`, `Space`, `View`, `Grid`, `Material`, `MechanicalSystem`, and `ProjectInfo`.
These are included on purpose.
Why:
- Real projects are not clean
- Rules must explicitly filter scope
- Accidental matches must be avoided
- Category-based and type-based filtering must be correct
If your rule accidentally hits these objects, your filter is wrong.
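A simple scope guard, sketched in Python, makes such accidental matches fail loudly (the `ALLOWED` set and sample matches are illustrative):

```python
# Pretend a rule returned these matches; the Room is an accidental hit.
matches = [{"type": "FamilyInstance"}, {"type": "Room"}]

ALLOWED = {"FamilyInstance"}
unexpected = {m["type"] for m in matches} - ALLOWED
print(sorted(unexpected))  # ['Room'] — the filter is wrong
```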
## How to use this model when writing rules

### Step 1 — Start with the smallest possible filter

Example:

```
$[type = "FamilyInstance"]
```
Verify:
- How many objects are returned
- Which ones they are
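A Python equivalent of this smallest filter, assuming the export is a flat array of typed objects (the sample data stands in for the real file):

```python
# Stand-in for the loaded JSON export: a flat list of typed objects.
objects = [
    {"type": "FamilyInstance", "values": {"mark": "61"}},
    {"type": "FamilySymbol"},
    {"type": "Room"},
]

matches = [o for o in objects if o.get("type") == "FamilyInstance"]
print(len(matches))             # how many objects are returned
for m in matches:
    print(m["values"]["mark"])  # which ones they are
```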
### Step 2 — Add one constraint at a time
- Filter by category (via symbol)
- Filter by Assembly Code
- Filter by family name
- Filter by editability
Never add two conditions at once unless you already trust both.
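The step-by-step narrowing can be sketched like this, printing a count after each constraint so you can see exactly which condition changes the result (object shape and id values are illustrative):

```python
# One in-scope door instance, one instance whose type cannot be resolved.
objects = [
    {"id": 1, "type": "FamilySymbol",
     "values": {"category": {"label": "OST_Doors"}, "assemblyCode": "32.31"}},
    {"id": 2, "type": "FamilyInstance", "parent": {"id": 1}},
    {"id": 3, "type": "FamilyInstance", "parent": {"id": 99}},  # unknown type
]
by_id = {o["id"]: o for o in objects}

# Step 1: type filter only.
step1 = [o for o in objects if o.get("type") == "FamilyInstance"]
print("instances:", len(step1))  # 2

def symbol_of(inst):
    return by_id.get(inst.get("parent", {}).get("id"), {})

# Step 2: add exactly one constraint — category via the symbol.
step2 = [o for o in step1
         if symbol_of(o).get("values", {}).get("category", {}).get("label")
         == "OST_Doors"]
print("doors:", len(step2))  # 1
```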
### Step 3 — Intentionally break the model
To test rule robustness:
- Remove a parameter binding
- Clear a parameter value
- Change an Assembly Code
- Rename the family
A good rule:
- Fails when it should
- Explains why
- Does not silently pass
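One way to sketch such a rule is to return a reason alongside the verdict (the non-empty `mark` check is illustrative, not a real DAQS rule):

```python
def check_mark(instance):
    """Return (passed, reason); reason explains any failure."""
    mark = instance.get("values", {}).get("mark")
    if not mark:
        return False, "FamilyInstance has no value for 'mark'"
    return True, None

# A deliberately broken instance: the mark has been cleared.
ok, reason = check_mark({"type": "FamilyInstance", "values": {}})
print(ok, reason)  # fails, and says why
```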
## Rule of thumb
If a rule only works on a “perfect” model, it is not a good rule.
The minimal test model exists to prove correctness under controlled imperfection.
## Key takeaway
This minimal model is:
- Small enough to understand
- Rich enough to be realistic
- Stable enough to debug against
- Honest about real-world data issues
Use it before Playground. Use it before Production. Use it whenever a rule feels “weird”.