1. In addition to the obligations listed in Article C, providers of general-purpose AI models with systemic risk shall:
(a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;
(b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, placing on the market, or use of general-purpose AI models with systemic risk;
(c) keep track of, document and report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;
(d) ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.
2. Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article E to demonstrate compliance with the obligations set out in paragraph 1, until a harmonised standard is published. Compliance with a European harmonised standard grants providers the presumption of conformity. Providers of general-purpose AI models with systemic risk who do not adhere to an approved code of practice shall demonstrate alternative adequate means of compliance for approval by the Commission.
3. Any information and documentation obtained pursuant to the provisions of this Article, including trade secrets, shall be treated in compliance with the confidentiality obligations set out in Article 70.