With companies rapidly adopting AI, it is natural to wonder whether the federal government can formulate regulatory frameworks at the same pace. The Federal Trade Commission (FTC) recently reminded companies of the existing legal framework it can use to ensure truth, fairness, and equity among developers and users of AI.
The FTC enforces three laws that can be applied to AI:
- Equal Credit Opportunity Act (ECOA) makes it illegal to use a biased algorithm that results in credit discrimination on a prohibited basis such as race, religion, or sex.
- Fair Credit Reporting Act (FCRA) comes into play when an algorithm is used to deny people employment, housing, credit, or insurance.
- Section 5 of the FTC Act bars companies from engaging in unfair or deceptive practices, which would include the sale or use of biased algorithms.
The FTC offers companies the following recommendations:
- Understand Your Data Foundation: Incomplete data may result in discrimination against legally protected groups, so AI models should address data gaps before deployment.
- Test Your Outcomes: Beyond addressing data gaps, companies need to test whether their AI algorithms actually produce discriminatory outcomes, both before use and periodically afterward.
- Enable Transparency and Independence: Companies are advised to adopt transparency frameworks and independent standards, for example by conducting and publishing the results of independent audits.
- Underpromise What Your Algorithm Can Do: Do not advertise that your AI algorithm accounts for bias unless you can back up that claim; unsupported claims, whether made online or offline, amount to deception.
- Disclose How You Use Data: The FTC can levy heavy fines and require a company to delete improperly collected data, as well as the AI models built on that data, even when the bias was unintended.
- Do More Good Than Harm: The FTC gauges fairness by weighing a practice's benefits against its harms. If your AI model causes substantial harm, the FTC will most likely challenge its use.
- Hold Yourself Accountable-Or The FTC Will Do It For You: The FTC takes allegations of discrimination seriously, so it's imperative to hold yourself accountable for your algorithm's performance.
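To make the "Test Your Outcomes" advice concrete, one common screen for disparate impact in selection decisions is the four-fifths (80%) rule from employment law: a group whose selection rate falls below 80% of the highest group's rate warrants scrutiny. The sketch below is a hypothetical illustration of that check, not an FTC-prescribed test, and the group names and decision data are invented for the example.

```python
# Hypothetical sketch of a disparate-impact screen using the four-fifths rule:
# a group's selection rate below 80% of the top group's rate gets flagged.
# Group names and outcome data are illustrative, not real.

def selection_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 outcomes (1 = approved)."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def four_fifths_flags(decisions, threshold=0.8):
    """Return {group: True} for groups whose rate is below threshold * top rate."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: (r / top) < threshold for g, r in rates.items()}

if __name__ == "__main__":
    sample = {
        "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
    }
    print(four_fifths_flags(sample))  # group_b is flagged
```

A screen like this is only a starting point; a flagged ratio signals the need for deeper review of the data and model, not a legal conclusion by itself.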
The original article appeared in Forbes.