Researchers have found promising new ways to make AI models forget copyrighted content, suggesting it may be possible to satisfy legal requirements without the lengthy and costly process of retraining models.

Training AI models requires huge quantities of data, which model makers have acquired by scraping the internet without permission and, allegedly, by knowingly downloading copyrighted books.

Those practices have drawn many copyright lawsuits against model makers and raised eyebrows among regulators, who question whether AI companies can comply with the General Data Protection Regulation's right to erasure (often called the right to be forgotten) and the California Consumer Privacy Act's right to delete.

The most straightforward way to address these issues is to retrain models without the disputed data, but, as noted above, retraining is lengthy and costly.