Research Team Develops Guidelines to Prevent Data Leakage in AI

Symbolic image: FAU/Georg Pöhlein

A research team including Prof. David Blumenthal of FAU's Department Artificial Intelligence in Biomedical Engineering (AIBE) has developed guidelines to prevent data leakage in AI applications. Data leakage occurs when information is improperly transferred from test data into training data, leading to overly optimistic and therefore unreliable results.
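To make the concept concrete, here is a minimal sketch of one common way leakage can arise in practice, using scikit-learn with a synthetic dataset. It is an illustration of the general pitfall, not an example taken from the team's guidelines: fitting a preprocessing step on the full dataset before splitting lets test-set statistics leak into training, while fitting it inside a pipeline on the training split alone avoids this.

```python
# Illustrative sketch (assumed setup, not from the published guidelines):
# leakage via preprocessing fitted before the train/test split.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Leaky: the scaler sees the full dataset, including future test rows,
# so test-set statistics influence the training features.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)
leaky_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Safe: split first, then fit all preprocessing on training data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
safe_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
safe_model.fit(X_tr, y_tr)

print("leaky test accuracy:", leaky_model.score(X_te, y_te))
print("safe test accuracy:", safe_model.score(X_te, y_te))
```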

Prof. Blumenthal emphasized that while popular machine-learning frameworks make workflows easier, they also increase the risk of incorrect application. The team therefore formulated seven key questions to guide the construction of ML models and ensure robust, reproducible research.

The findings were published in Nature Methods on August 9, 2024. For more details, contact Prof. Blumenthal at david.b.blumenthal@fau.de.