## 2. Open versus closed modeling

Depending on the scope of the project, you'll want to consider an open or a closed modeling approach. An open model is more generic, having few constraints, while a closed model is more specific, having more constraints. National and regional projects benefit from an open modeling approach, as it retains the greatest amount of flexibility for downstream projects. Downstream projects, on the other hand, might want to go with a closed modeling approach to set a contract on what kind of data can be received, making it easier to build the receiving systems (database design and so on).

### 2.1 Differences between open and closed modeling

**Example 1 – Open model**

{{render:openmodel-expanded-png}}

Observe how most of the cardinalities are left at their default (grey) in this open model, with the explicitly constrained cardinalities in black.

**Example 2 – Closed model**

{{render:closedmodel-png}}

Most cardinalities in this closed model have been constrained, and unnecessary elements have been explicitly disallowed.

The advantages and disadvantages of open and closed modeling are summarized in the table below.

| |Open modeling|Closed modeling|
|---|---|---|
|Pros|Forward compatibility|No need to support all elements|
| |Focus on what must be supported|More specific models|
| |More generic data fit|Smaller, straightforward models|
| | |More implementer feedback|
|Cons|Implementers might need to support all of the elements|More versions of models|
| |Larger, vaguer models|Only backwards compatibility|
| |Less implementer feedback|New elements require a new version|

### 2.2 Guidelines for selecting a modeling approach

Our vision at Firely is to use an open modeling approach as much as possible: this way your models will be more generic and reusable. Elements that are not used can be freely ignored, while the mandatory ones will be present. In the end, however, the choice depends entirely on your use case.
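To make the contrast concrete, here is a minimal sketch of how the same element might appear in the differential of an open versus a closed profile. The element paths are real FHIR `Patient` paths, but the constraints themselves are purely illustrative, not taken from any published profile:

```json
{
  "comment": "Open profile: only constrain what must be present, leave the rest at defaults",
  "element": [
    { "id": "Patient.identifier", "path": "Patient.identifier", "min": 1 }
  ]
}
```

```json
{
  "comment": "Closed profile: also cap maximums and explicitly disallow unused elements",
  "element": [
    { "id": "Patient.identifier", "path": "Patient.identifier", "min": 1, "max": "1" },
    { "id": "Patient.photo", "path": "Patient.photo", "max": "0" }
  ]
}
```

Note how the closed variant sets `"max": "0"` on an element it never expects to receive, which is exactly the kind of constraint that costs forward compatibility later.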
#### 2.2.1 Constraining maximum cardinality

For the reasons mentioned above, we try to avoid constraining the maximum cardinality to 0. Depending on the use case, you can take this one step further by not constraining the maximum cardinality at all. Instead, you could slice the element and constrain the cardinality of the slice. This makes it easier for client systems to send their data to you (and others) without having to create custom interfaces that strip out unrelated data, while you can still find the data you need.

However, be careful with specifying everything in slices and leaving the rest open. If any data falls into the open slice and is still considered valid, validating against your profile becomes meaningless.

In the end it all depends on your use case. You may want to specify exact expectations for smaller use cases that have already been derived from national, regional or domain profiles. If you know you only need one occurrence for your use case, and the parties participating in that use case have no intention to support FHIR beyond it, it is defensible to make that explicit and limit the cardinality.

#### 2.2.2 Constraining minimum cardinality

Another consideration is to use the MustSupport flag to mark elements that are relevant to your profile, instead of setting the minimum cardinality to 1. Of course, this depends on the meaning of MustSupport in your profile, which should be clarified in your implementation guide.
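As a sketch of both techniques together, the differential below slices `Patient.identifier` on its `system` instead of capping the element's overall cardinality, and flags the slice with `mustSupport` rather than forcing a minimum cardinality of 1. The slice name and identifier system URL are hypothetical examples, not part of any real specification:

```json
{
  "element": [
    {
      "id": "Patient.identifier",
      "path": "Patient.identifier",
      "slicing": {
        "discriminator": [{ "type": "value", "path": "system" }],
        "rules": "open"
      }
    },
    {
      "id": "Patient.identifier:nationalId",
      "path": "Patient.identifier",
      "sliceName": "nationalId",
      "min": 0,
      "max": "1",
      "mustSupport": true
    },
    {
      "id": "Patient.identifier:nationalId.system",
      "path": "Patient.identifier.system",
      "fixedUri": "http://example.org/fhir/sid/national-id"
    }
  ]
}
```

Because the slicing rules are `open`, senders may include any other identifiers they have; receivers can still locate the one they care about by matching on the discriminator, and the `mustSupport` flag signals its relevance without requiring every instance to contain it.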