# What Is a Model Card?

## Origin
Model cards were introduced by Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru in the 2019 paper "Model Cards for Model Reporting," published at the ACM Conference on Fairness, Accountability, and Transparency (then FAT*, now FAccT).
The paper proposed model cards as short documents accompanying trained ML models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups and intersectional groups. The concept draws inspiration from nutrition labels and datasheets for electronic components.
## Purpose
- Provide transparency about a model's intended use, limitations, and ethical considerations.
- Enable downstream users to make informed decisions about whether a model is appropriate for their use case.
- Document evaluation results across different demographic and intersectional groups.
- Facilitate regulatory compliance and audit readiness.
- Create accountability by recording training data provenance, evaluation methodology, and known limitations.
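Documenting performance across demographic and intersectional groups, as the third point above calls for, is concrete enough to sketch in code. The following is an illustrative helper, not part of any standard: the record format and subgroup keys are assumptions, and the function simply computes accuracy overall, per subgroup, and for pairwise intersections of subgroups.

```python
# Sketch: disaggregated evaluation for a model card's quantitative analyses.
# The record format and subgroup keys below are illustrative assumptions.
from collections import defaultdict
from itertools import combinations

def disaggregated_accuracy(records, group_keys):
    """Accuracy overall, per subgroup, and for pairwise intersections.

    records: iterable of dicts with a boolean 'correct' plus one value per
             group key, e.g. {'correct': True, 'gender': 'F', 'age': '18-30'}.
    """
    counts = defaultdict(lambda: [0, 0])  # label -> [num correct, num total]
    for r in records:
        labels = [("overall",)]
        # unitary results: one metric value per single subgroup
        labels += [((k, r[k]),) for k in group_keys]
        # intersectional results: every pair of subgroup values
        labels += [((a, r[a]), (b, r[b])) for a, b in combinations(group_keys, 2)]
        for label in labels:
            counts[label][0] += int(r["correct"])
            counts[label][1] += 1
    return {label: c / t for label, (c, t) in counts.items()}

# Toy evaluation set with two demographic factors (values are made up).
records = [
    {"correct": True,  "gender": "F", "age": "18-30"},
    {"correct": False, "gender": "F", "age": "31-60"},
    {"correct": True,  "gender": "M", "age": "18-30"},
    {"correct": True,  "gender": "M", "age": "31-60"},
]
results = disaggregated_accuracy(records, ["gender", "age"])
print(results[("overall",)])                         # 0.75
print(results[(("gender", "F"),)])                   # 0.5
print(results[(("gender", "F"), ("age", "18-30"))])  # 1.0
```

A real model card would report such numbers in the Quantitative Analyses section, alongside the metric definitions and decision thresholds used.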
## Standard Sections
The Mitchell et al. (2019) paper defines the following nine standard sections for a model card. Each section addresses a specific aspect of model documentation.
| Section | Description |
|---|---|
| Model Details | Basic information: developer, model date, model version, model type, architecture, training algorithms, paper or resource citation, license, contact information. |
| Intended Use | Primary intended uses and users, out-of-scope use cases. Defines the boundaries of appropriate deployment. |
| Factors | Relevant factors including groups (demographic or phenotypic), instrumentation (hardware, software, sensors), and environment (deployment conditions). |
| Metrics | Performance measures chosen, motivation for those metrics, and decision thresholds used. |
| Evaluation Data | Datasets used for evaluation, motivation for choosing them, and preprocessing applied. |
| Training Data | Datasets used for training, including size, provenance, collection methodology, and preprocessing details. |
| Quantitative Analyses | Unitary results (single metric values) and intersectional results (performance across subgroups and their intersections). |
| Ethical Considerations | Ethical concerns related to the model, including potential harms, sensitive use cases, and mitigations applied. |
| Caveats and Recommendations | Additional concerns, known limitations, and recommendations for model users and future work. |
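Laid out as a document, the nine sections above form a simple markdown skeleton. The placeholder text here is illustrative, not prescribed by the paper:

```
# Model Card: <model name>

## Model Details
Developer, date, version, model type, architecture, license, citation, contact.

## Intended Use
Primary intended uses and users; out-of-scope use cases.

## Factors
Relevant groups, instrumentation, and environment.

## Metrics
Performance measures, motivation, and decision thresholds.

## Evaluation Data
Datasets, motivation for choosing them, preprocessing.

## Training Data
Datasets, size, provenance, collection methodology, preprocessing.

## Quantitative Analyses
Unitary results; intersectional results across subgroups.

## Ethical Considerations
Potential harms, sensitive use cases, mitigations applied.

## Caveats and Recommendations
Known limitations; recommendations for users and future work.
```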
## Who Writes Model Cards?
Model cards are typically authored by the model developers or the team responsible for training and deploying the model. In practice, this includes:
- ML engineers who trained the model and can document architecture, training data, and performance metrics.
- Data scientists who designed the evaluation methodology and can report on disaggregated performance.
- Product managers who define intended use cases and out-of-scope applications.
- Ethics and policy teams who assess potential harms, fairness implications, and regulatory requirements.
## Where Model Cards Are Published
- Hugging Face Model Hub -- Model cards are the standard documentation for model repositories on Hugging Face, stored as README.md files with a structured metadata header.
- TensorFlow Model Garden -- Google publishes model cards for models in the TensorFlow ecosystem.
- Research papers -- Model cards are often included as appendices or supplementary materials in ML publications.
- Corporate AI transparency reports -- Companies like Google, Microsoft, and Meta publish model cards as part of their responsible AI practices.
- Government and regulatory filings -- Model documentation is increasingly required by regulations such as the EU AI Act.
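On Hugging Face, the structured metadata mentioned above takes the form of a YAML header at the top of the model card's README.md. The field names below are drawn from the Hub's metadata format; the values are illustrative:

```yaml
---
language: en
license: apache-2.0
library_name: transformers
pipeline_tag: text-classification
datasets:
  - imdb
metrics:
  - accuracy
---
```

The Hub parses this header to power search filters, license display, and evaluation leaderboards; the free-form model card prose follows below it in the same file.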
Next step: View the Model Card Template to see the recommended structure and fill in documentation for your own models.