
It’s true that the United States has made progress on data protection through laws such as the California Consumer Privacy Act (CCPA) and non-binding documents such as the Blueprint for an AI Bill of Rights. However, there are currently no standard regulations dictating how tech companies should mitigate AI bias and discrimination.
As a result, many companies are falling behind in building ethical, privacy-first tools. Nearly 80% of data scientists in the U.S. are male and 66% are white, which shows an inherent lack of diversity and demographic representation in the development of automated decision-making tools, often leading to skewed data results.
For technology companies to account for everyone when creating and modifying their products, their design review processes need to improve significantly. Otherwise, organizations risk losing customers to competitors, damaging their reputation and inviting serious lawsuits. According to IBM, about 85% of IT professionals believe consumers choose companies that are transparent about how their AI algorithms are created, managed and used. That number is likely to grow as more users push back against harmful and biased technology.
So what should companies keep in mind when analyzing prototypes? Here are four questions that development teams should ask themselves.
Have you excluded all types of bias from your prototype?
As we know, technology has the power to revolutionize society, but it will ultimately fail if it does not benefit everyone equally.
To build effective, unbiased technology, AI teams should develop a list of questions to ask during the review process that can help them identify potential problems in their models.
There are a number of methodologies AI teams can use to evaluate their models, but before applying any of them, it is important to assess the end goal and whether any groups could be disproportionately affected by the outcomes of the AI’s use.
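As a minimal illustration of one such methodology, the sketch below applies the widely used “four-fifths” disparate impact heuristic to hypothetical review data. The function name, sample data and threshold here are illustrative assumptions, not a standard prescribed by any particular framework:

```python
from collections import defaultdict

def disparate_impact(outcomes):
    """Compare positive-outcome rates across groups.

    outcomes: iterable of (group, got_positive_outcome) pairs.
    Returns (ratio, rates), where ratio is the lowest group rate
    divided by the highest; values below ~0.8 are a common red
    flag (the "four-fifths rule").
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        if positive:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical review data: (group, did the model approve?)
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
ratio, rates = disparate_impact(sample)
print(rates)             # {'A': 0.8, 'B': 0.55}
print(round(ratio, 2))   # 0.69 -- below 0.8, worth investigating
```

A check like this won’t explain why a gap exists, but it gives a review team a concrete, repeatable signal that a model deserves closer scrutiny before it ships.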
Consider, for instance, that facial recognition technology can inadvertently discriminate against people of color, something that happens all too often in AI algorithms. A 2018 study by the American Civil Liberties Union found that Amazon’s facial recognition incorrectly matched 28 members of the U.S. Congress with mugshots. A staggering 40% of the false matches were people of color, even though they make up only 20% of Congress.
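A review team could run the same kind of comparison on its own results. The sketch below is a hypothetical reconstruction, not the ACLU’s actual methodology or data; it checks whether any group’s share of false matches exceeds its share of the population being tested:

```python
from collections import Counter

def false_match_skew(population_groups, false_match_groups):
    """Compare each group's share of false matches to its share of
    the test population. A ratio well above 1.0 means the group is
    disproportionately affected by false matches."""
    pop = Counter(population_groups)
    errs = Counter(false_match_groups)
    pop_total = sum(pop.values())
    err_total = sum(errs.values())
    return {
        group: (errs[group] / err_total) / (pop[group] / pop_total)
        for group in pop if errs[group]
    }

# Numbers mirroring the Congress study: people of color are ~20%
# of the 535 members tested but ~40% of the 28 false matches.
population = ["poc"] * 107 + ["white"] * 428
false_matches = ["poc"] * 11 + ["white"] * 17
for group, skew in false_match_skew(population, false_matches).items():
    print(group, round(skew, 2))   # poc 1.96, white 0.76
```

On these numbers, people of color appear among the false matches at roughly twice the rate their representation would predict, which is exactly the kind of disparity a prototype review should surface.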
By asking challenging questions, AI teams can improve their models and find new ways to prevent these scenarios from occurring. For example, in-depth research can help determine whether more data needs to be examined, or whether a third party, such as a privacy expert, should be brought in to review the product.
Plot4AI is a great resource for those just getting started.