Most data models are designed for the writer. That's backwards.
The data modeling literature is dominated by normalization rules that were correct in 1992 and are only half-right in 2026. The original goal of normalization was to minimize storage and prevent update anomalies, both of which mattered enormously when storage was expensive and single-write transactional databases were the whole game. Today, storage is effectively free, and most analytical data is read far more often than it's written. Denormalization is not a sin; in analytical systems it's frequently the correct answer.
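To make "update anomaly" concrete, here is a toy sketch in Python. The table, customer names, and values are all invented for illustration; the point is only that duplicating a fact across rows lets the copies drift apart on update.

```python
# Hypothetical denormalized orders table: the customer's city is
# repeated on every order row instead of living in one place.
orders = [
    {"order_id": 1, "customer": "acme", "city": "Austin", "total": 120},
    {"order_id": 2, "customer": "acme", "city": "Austin", "total": 80},
]

# Update anomaly: fixing the city on one row leaves the other row stale,
# so the same customer now has two conflicting cities.
orders[0]["city"] = "Boston"
cities = {o["city"] for o in orders if o["customer"] == "acme"}
print(cities)  # prints both cities: the data now disagrees with itself
```

Normalization prevents this by storing the city once in a customer table; in an analytical system where rows are written by a pipeline and rarely updated in place, the anomaly risk largely disappears, which is why the tradeoff flips.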
What most teams get wrong is inheriting the OLTP normalization mindset when designing their warehouse. They build an elegant third-normal-form schema, hand it to analysts, and then watch every dashboard query become a seven-table join that's slow and error-prone. The better pattern for warehouses is explicit: dimensional modeling (Kimball) for the query layer, flat, wide tables where the access pattern is clear, and denormalization wherever it makes the reader's life easier. The write-time cost is borne by pipelines, which run once per load. The read-time cost is borne by every query, which runs thousands of times.
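The tradeoff above can be sketched with in-memory SQLite. The schema and data are invented for illustration: a normalized layout makes every reader pay for a join, while building a wide table once moves that cost into the pipeline.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Normalized layout: the fact table references a dimension table by key.
cur.execute("CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT)")
cur.execute("CREATE TABLE fact_sales (sale_id INTEGER, customer_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_customer VALUES (?, ?)", [(1, "west"), (2, "east")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0)])

# Reader pays: every analytical query repeats the join.
normalized = cur.execute(
    """SELECT d.region, SUM(f.amount)
       FROM fact_sales f JOIN dim_customer d USING (customer_id)
       GROUP BY d.region ORDER BY d.region"""
).fetchall()

# Pipeline pays once: materialize a wide, denormalized table at write time.
cur.execute(
    """CREATE TABLE sales_wide AS
       SELECT f.sale_id, f.amount, d.region
       FROM fact_sales f JOIN dim_customer d USING (customer_id)"""
)
wide = cur.execute(
    "SELECT region, SUM(amount) FROM sales_wide GROUP BY region ORDER BY region"
).fetchall()

print(normalized == wide)  # same answer either way; only who pays the join changed
```

With two tables the join is cheap; the argument bites when a real dashboard query joins a fact table to six or seven dimensions, thousands of times a day.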
The practical rule for 2026: model for the reader. OLTP systems get normalized. Warehouses get dimensional. Feature stores get flattened. Analytical APIs get the schema that makes the consumer's query simplest. Anything else is an aesthetic preference masquerading as engineering.