Should AI systems be regulated? Is anticipatory regulation justified for certain classes of risks and/or AI applications? Should lawmakers wait before regulating, to safeguard innovation incentives and enable experimentation? In what areas are regulatory sandboxes helpful? Are there red lines that should apply across the board to all AI systems? Can we envision regulatory compliance by design, as in Asimov's fiction?