AI models are filled to the brim with bias, whether that's showing you a particular race when you ask for a pic of a criminal or assuming that a woman couldn't possibly be the firefighter you asked for. To deal with these issues, Sony AI has released a new dataset for testing the fairness of computer vision models, one that its makers claim was compiled in a fair and ethical way.
The Fair Human-Centric Image Benchmark (FHIBE, or "Fee-bee") "is the first publicly available, consensually collected, and globally diverse fairness evaluation dataset for a wide variety of human-centric computer vision tasks," according to Sony AI.
"A common misconception is that because computer vision is rooted in data and algorithms, it's a completely objective reflectio