Zara Stone, Forbes Contributor
Nick Loui had never been asked about neural networks from a man wearing a violet leotard before. But the CEO and founder of CivicFeed didn't bat an eyelid as he thoughtfully answered the question. Given the context — a panel at Lightning In A Bottle — the dress code was A-OK. The question was part of a broader discussion that Loui was having with Dr. Nathan Walworth about the ethics of artificial intelligence and how engineers hold responsibility for coding a bias-free world.
Using AI for social good is something Loui, a former web developer and marketer, feels passionately about. That's why he founded CivicFeed in 2017: to make government actions more transparent. Instead of wading through hundreds of documents to understand what those in power are actually doing, CivicFeed uses AI to sift through them and update you with the most relevant content on the bills and politicians you're interested in.
As artificial intelligence becomes deeply ingrained in the creation of our products, concern about whose interests those products serve has spawned a number of gatekeepers. Their goal: to keep human welfare paramount in the code. Enter the ethics committee. Talks like this one are a good way to hash out the concerns of everyday people, but the tech titans are holding court, too.
Over the last two years, a number of think tanks and working groups have been created to address this. In academia, you can look to Harvard's AI Initiative, Oxford University's AI Code of Ethics project and AI4ALL at the University of California, to name a few. Then there's the Partnership on AI, a nonprofit founded in 2016 by a collection of tech companies including Apple, Facebook, Google, IBM, and Microsoft. Its task is to develop best practices, create inclusive networks and study the social impact of AI.
This is a good start, but the work doesn't end there.
“In the U.S. we can have conversations about ethics as a thought leader, but other countries don't necessarily have that,” said Loui. “Right now the U.S. is ahead, but I'm concerned that China and Russia could beat us, as they take it very seriously.” The problem starts with where the intelligence comes from: the data. Loui says that simply using “data” to come to conclusions doesn't stop those conclusions from being flawed. “Remember that data bias exists, so question people when they say their answer comes from data,” he said. “Ask them how it was compiled.”
This has a knock-on effect: issues with AI for predictive sentencing, AI that shows racial bias, and AI that might filter which classes kids attend in school based on the algorithmic likelihood they'll perform better.
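Loui's warning about data bias can be made concrete with a toy sketch. Everything here is invented for illustration (the records, the "group" attribute, the approval outcomes): the point is that a model which naively learns from historically skewed decisions simply reproduces the skew, which is why he says to ask how the data was compiled.

```python
from collections import Counter

# Hypothetical historical decisions, skewed by past human bias.
# "group" stands in for any protected attribute; the data is made up.
records = [
    {"group": "A", "approved": True},   # group A historically favored
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},  # group B historically disfavored
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

def approval_rate(group):
    """A naive 'model' that just learns the per-group approval rate
    from the historical records."""
    outcomes = [r["approved"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

# The answer "comes from data," yet it encodes the old bias:
print(approval_rate("A"))  # 1.0 — the skew in the data becomes the output
print(approval_rate("B"))  # ~0.33
```

Nothing in the code is "wrong" in a narrow sense; the flaw was compiled into the dataset before any model saw it, which is exactly the failure mode behind biased predictive sentencing and school-placement systems.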
But along with the hand-wringing come the positives AI can bring, and the startups looking to tackle the bias problem at its root.
In California, engineer Laura Montoya founded Accel A.I. with the mission of democratizing access to AI education, with a focus on people from minority groups. “I run demystifying-AI workshops for people with no prior experience,” she said. “They are interested in the buzz and want a better understanding.”
She’s not alone. A number of companies are working on different programs designed for a wider social impact. “I’m excited about startups that are taking this technology and democratizing it,” said Loui.
Examples include:
DoNotPay: an AI-powered chatbot that provides legal help with things like parking tickets and airline issues.
Cultivate AI: uses natural language processing to help employers neutralize unconscious bias that might bleed into assessments or conversations.
For Walworth, a climate scientist and co-chair of the NEXUS Futurism Lab, AI is a powerful tool and one that needs to be treated with care. “The younger generations are being born into smart technology,” he said. “It's so hard to really be conscious of your unconscious biases that are being recorded by massive data farms. They're learning who you are and telling us exactly what we want: to move product!” For all that, he believes the benefits of AI in medical and social care far outweigh the negatives. Plus, you can never put that genie back in the bottle.
His AI highlights include AI for Earth, a project from Microsoft. “They're leveraging AI to learn about the natural world,” he said. “The environment is degrading and it's hard for us to understand.” He highlighted one project where AI for Earth tracked mosquitoes. “They go into the Amazon rainforest and there's an algorithm to identify, from the way the wings beat, what type of mosquito this is, and it helps them identify the vectors of something like Zika.”
Back to the ethics problem: some sort of bias is sadly inevitable in programming. “We humans all have a bias,” said computer scientist Ehsan Hoque, who leads the Human-Computer Interaction Lab at the University of Rochester. “There's a study where judges make more favorable decisions after a lunch break. Machines have an inherent bias (as they are built by humans), so we need to empower users with ways to make decisions.”
Walworth, for instance, empowers his own choices by being conscious of what AI algorithms show him. “I recommend you do things that are counterintuitive,” he said. “For instance, read a spectrum of news, everything from Fox to CNN and The New York Times, to combat the algorithm that decides what you see.” The Cambridge Analytica election scandal is a case in point: algorithms dictated what you'd see, how you'd see it and whether more of the same got shown to you, and Cambridge Analytica manipulated them to sway voters.
The move toward a consciousness of ethical AI is both a top-down and a bottom-up approach. “There's a rising field of impact investing,” explained Walworth. “Investors and shareholders are demanding something higher than the bottom line, some accountability with the way they spend and invest money.”
Conversations like the ones held at Lightning In a Bottle, and those going on globally, from Google boardrooms to Starbucks counters, are pivotal in changing the way we move forward with AI. Walworth said that he speaks at a lot of AI summits, and their audiences have generally already made up their minds. “Summits are expensive or [often] they're invite only,” he said. “This [talk] is opening [discussions on AI] to a larger demographic.”
And the changes, they are a-coming. In December, New York City passed an algorithmic accountability bill to address algorithmic discrimination against its constituents. We may never get rid of bias completely, but we can do a lot to mitigate it. Having more watchdogs is just the beginning.