Microsoft President Brad Smith states that the company applies its own internal safety guardrails to its AI systems regardless of the external regulatory environment.
Argues that safety must start with the creators of the technology, framing proactive corporate responsibility as the foundation of AI safety.
Highlights protecting children as an absolute prerequisite; failure to do so means a company "doesn't deserve to be in business."
Identifies other key safety pillars as protecting individual privacy and ensuring cybersecurity, especially for enterprises.
Expects global regulatory approaches to AI to remain divergent in the near term, with no convergence likely in the next couple of years.
Suggests some degree of global regulatory convergence may emerge over the longer term, beyond that horizon.
The commentary frames Microsoft's AI strategy as treating built-in safety and ethical compliance as a core business requirement rather than a regulatory afterthought.