Understanding The Memorization Of Data Including Personally Identifiable Information in the GPT-2 Model

In a recent paper, researchers at Berkeley Artificial Intelligence Research (BAIR) evaluated how large language models memorize and regurgitate rare snippets of their training data. The study focused on GPT-2 and found that at least 0.1% of its text generations contain lengthy verbatim strings "copy-pasted" from a document in its training set.
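To make the idea concrete, here is a minimal sketch of how one might probe a language model for memorized content, roughly in the spirit of the extraction approach described in the paper: sample freely from the model, then flag the generations the model itself assigns unusually high likelihood (low perplexity), since memorized text tends to be "surprisingly easy" for the model. The checkpoint name, sampling parameters, and use of perplexity alone as the ranking signal are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch only: probe GPT-2 for candidate memorized text by sampling and
# ranking samples by the model's own perplexity. Parameters are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sample_texts(n_samples=10, max_length=128):
    """Generate unconditional samples from GPT-2, starting from the BOS token."""
    input_ids = torch.full((n_samples, 1), tokenizer.bos_token_id, dtype=torch.long)
    with torch.no_grad():
        outputs = model.generate(
            input_ids,
            do_sample=True,
            max_length=max_length,
            top_k=40,  # top-k sampling; an assumed, commonly used setting
            pad_token_id=tokenizer.eos_token_id,
        )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def perplexity(text):
    """Model perplexity on a text; unusually low values suggest memorization."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Rank samples from most to least "familiar" to the model. The lowest-perplexity
# samples are candidates for verbatim training data and would still need to be
# checked against the actual training corpus to confirm memorization.
samples = sample_texts()
scored = sorted((perplexity(t), t) for t in samples)
for ppl, text in scored[:3]:
    print(f"[ppl={ppl:8.2f}] {text[:80]!r}")
```

Low perplexity alone is a noisy signal (common boilerplate also scores low), which is why verifying candidates against the training set is the step that actually establishes memorization.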

Such memorization would be a serious issue for language models trained on private data, such as users' emails, because the model might inadvertently output a user's sensitive conversations. Yet even for models trained on public data from the Web, memorization of training data raises multiple challenging regulatory questions, ranging from misuse of personally identifiable information to copyright infringement.

Summary: https://www.marktechpost.com/2020/12/30/understanding-the-memorization-of-data-including-personal-identifiable-information-in-gpt-2-model/

Paper: https://arxiv.org/pdf/2012.07805.pdf
