Safety
-
Blog
Sam Altman exits OpenAI commission for AI safety to create ‘independent’ oversight – Computerworld
The safety of OpenAI’s technology has also been called into question under Altman, after reports surfaced that the company allegedly used illegal non-disclosure agreements and required employees to disclose whether they had been in contact with authorities, as a way to cover up security issues related to AI development. The effect of the move is as yet unknown. It…
Read More » -
Blog
Baby Safety Tips ER Doctors Want You to Know
There’s nothing that an exhausted new parent loves more than a hot cup of coffee after a terrible night. But be extra careful with that mug of liquid treasure: ER docs say that spilled coffee and other hot liquids have been known to lead to baby burns. “A common source of significant injuries for us is the stove. Not the…
Read More » -
Blog
New international AI treaty is a “welcomed step” to improving safety, reducing potential harms
The UK, EU, US and other countries have signed the first international, legally binding treaty on the safe use of AI, in a move aimed at encouraging innovation while mitigating potential risks to human rights. The new framework, agreed by the Council of Europe, commits signatories to collective action to manage AI products and protect the public from potential misuse. There…
Read More » -
Blog
US safety regulators say it’s time to investigate Shein and Temu
Safety regulators are urging the US Consumer Product Safety Commission (CPSC) to investigate the ultracheap e-commerce platforms Shein and Temu. In a statement published Tuesday, two CPSC commissioners say Shein and Temu “raise specific concerns,” including reports that “deadly baby and toddler products are easy to find on these platforms.” The statement cites last month’s report from The Information, which…
Read More » -
Blog
OpenAI and Anthropic Sign Deals With U.S. AI Safety Institute
OpenAI and Anthropic have signed agreements with the U.S. government, offering their frontier AI models for testing and safety research. An announcement from NIST on Thursday revealed that the U.S. AI Safety Institute will gain access to the technologies “prior to and following their public release.” Thanks to the respective Memorandums of Understanding — non-legally binding agreements — signed by…
Read More » -
Blog
OpenAI, Anthropic agree to get their models tested for safety before making them public – Computerworld
NIST has also taken other measures, including forming an AI safety advisory group in February this year that encompasses AI creators, users, and academics, to put guardrails on AI use and development. The advisory group, named the US AI Safety Institute Consortium (AISIC), has been tasked with coming up with guidelines for red-teaming AI systems, evaluating AI…
Read More » -
Blog
California’s contentious AI safety bill gets closer to becoming a law – Computerworld
She argued that SB 1047 “will harm our emerging AI ecosystem,” particularly affecting sectors that are already at a disadvantage compared to major tech companies, such as the public sector, academia, and smaller tech firms. Even some AI researchers and developers who support the idea of regulation have criticized the bill. Andrew Ng, a prominent AI entrepreneur and former head…
Read More » -
Blog
California State Assembly passes sweeping AI safety bill
The California State Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), Reuters reports. The bill is one of the first significant regulations of artificial intelligence in the US. The bill, which has been a flashpoint for debate in Silicon Valley and beyond, would obligate AI companies operating in California to implement a…
Read More » -
Blog
The New Grok Image Generator Ignores Nearly All Safety Guardrails & It’s Scary
With an early beta release of Grok-2, Elon Musk-led xAI announced that it is integrating an image generation model into its AI service. The image generation is powered by Flux, a new open-source model developed by Black Forest Labs. xAI’s Grok image generator has since come under fire for seemingly having no safety guardrails to prevent users from generating potentially harmful…
Read More » -
Blog
OpenAI exec says California’s AI safety bill might slow progress
In a new letter, OpenAI chief strategy officer Jason Kwon insists that AI regulation should be left to the federal government. As reported previously by Bloomberg, Kwon says that a new AI safety bill under consideration in California could slow progress and cause companies to leave the state. A federally driven set of AI policies, rather than a patchwork of state…
Read More »