Been injecting prompts to test the safety of large language models? Better call Saul

Existing US laws targeting those who illegally break into computer systems don't accommodate modern large language models (LLMs) and could expose researchers to prosecution for what ought to be sanctioned security testing, say a trio of Harvard scholars. ...