OpenAI, Anthropic agree to get their models tested for safety before making them public – Computerworld
NIST has also taken other measures, including the formation of an AI safety advisory group in February this year that encompasses AI creators, users, and academics, to put guardrails on AI use and development. The advisory group, named the US AI Safety Institute Consortium (AISIC), has been tasked with coming up with guidelines for red-teaming AI systems, evaluating AI…