{"text":[[{"start":9.74,"text":"Anthropic has launched a new cyber security AI model to a select group of customers, including Amazon, Apple and Microsoft, days after details about the project were leaked online."}],[{"start":22.33,"text":"Its new model, Claude Mythos Preview, would be available only to vetted organisations, including Broadcom, Cisco and CrowdStrike, Anthropic said on Tuesday. The company added it was also in discussions with the US government about its use."}],[{"start":40.28,"text":"The announcement follows a data leak by the San Francisco start-up last month, when descriptions of the Mythos model and other documents were discovered in a publicly accessible data cache."}],[{"start":52.66,"text":"Last week, Anthropic suffered a second incident, which led to the internal source code for its personal assistant, Claude Code, being made public."}],[{"start":63.06,"text":"The incidents raised concerns over Anthropic’s data vulnerabilities and security practices. In both instances, the company said “human error” was responsible for the data being made public."}],[{"start":75.15,"text":"Mythos has been in use with partners for several weeks. Although it is a “general purpose” model with wider capabilities, this is the first time the company has limited the release of a model, owing to its capabilities in cyber security."}],[{"start":90.48,"text":"Anthropic said the software can identify cyber vulnerabilities at a scale beyond human capacity, but it could also develop ways to exploit these vulnerabilities, which bad actors could use. The company said the model could “reshape” cyber security practices and that it does not plan a broad release."}],[{"start":110.23,"text":"“We believe technologies like this are powerful enough to do a lot of really beneficial good but also potentially bad if they land in the wrong hands,” said Dianne Na Penn, head of product management, research at Anthropic, adding that selected companies would “get a head start on being able to secure vulnerabilities and detect code at a scale they couldn’t have done before”."}],[{"start":136.43,"text":"In recent weeks, Mythos has identified thousands of so-called zero-day — previously undiscovered — vulnerabilities and other security flaws, many of which are critical and have persisted for a decade or more."}],[{"start":153.28,"text":"In one example, it found a 16-year-old flaw in widely used video software, in a line of code that automated testing tools had executed 5mn times without detecting the issue."}],[{"start":167.62,"text":"However, the model also displayed some issues during testing."}],[{"start":172.45,"text":"At one point, Anthropic found that it had escaped its so-called sandbox environment — designed to prevent it from accessing the internet — and posted details of its workaround online."}],[{"start":185.59,"text":"Anthropic acknowledged it demonstrated “a potentially dangerous capability for circumventing [the company’s] safeguards”."}],[{"start":194.37,"text":"Sam Bowman, a technical researcher at Anthropic, said the “scariest behaviours” were from “earlier versions” of the model. The current iteration was “less likely” to leak information, although it was still “at least as capable of doing things like working around sandboxes”, he added."}],[{"start":212.55,"text":"Anthropic has also been in ongoing discussions with US government officials about Claude Mythos. In February, the FT reported that the Pentagon was seeking to use AI tools for cyber operations to identify infrastructure targets from adversaries such as China."}],[{"start":231.44,"text":"Those talks have been taking place despite Anthropic’s row with the US defence department over recent weeks."}],[{"start":239.67,"text":"A US court has temporarily blocked the Pentagon’s effort to label the start-up a supply-chain risk, while President Donald Trump has criticised Anthropic as “leftwing nut jobs” after the company refused to shift its “red lines” on the use of its technology in warfighting."}],[{"start":258.9,"text":"Anthropic is committing up to $100mn to subsidise the use of its model through credits to organisations in the project, which will provide feedback on their findings. It will also donate $4mn to open-source security groups to help secure open software, which often carries higher cyber risk."}],[{"start":290.09,"text":""}]],"url":"https://audio.ftcn.net.cn/album/a_1775718491_4139.mp3"}