The AI Office should be able to take the necessary actions to monitor the effective implementation of, and compliance with, the obligations for providers of general-purpose AI models laid down in this Regulation. The AI Office should be able to investigate possible infringements in accordance with the powers provided for in this Regulation, including by requesting documentation and information, by conducting evaluations, and by requesting measures from providers of general-purpose AI models. In the conduct of evaluations, in order to make use of independent expertise, the AI Office should be able to involve independent experts to carry out the evaluations on its behalf. Compliance with the obligations should be enforceable, inter alia, through requests to take appropriate measures, including risk mitigation measures in the case of identified systemic risks, as well as through restricting the making available on the market, withdrawing or recalling the model. As a safeguard, where needed in addition to the procedural rights provided for in this Regulation, providers of general-purpose AI models should have the procedural rights provided for in Article 18 of Regulation (EU) 2019/1020, which should apply by analogy, without prejudice to more specific procedural rights provided for by this Regulation.