The ethics of artificial intelligence is a field still in its infancy. Unlike the evil machines of popular science fiction, unethical AI typically arises from biased data sets or from unconscious human bias in the design of machine learning models. The consequences can be very real, as seen in Amazon's abandoned AI recruiting tool, which systematically discriminated against women.
Despite expert consensus that ethics is a critical aspect of the long-term development of AI, there has been pushback against expanding the field. For example, after acquiring Twitter, Elon Musk quickly dismantled the company's ML Ethics, Transparency and Accountability (META) team, a group of AI researchers focused on creating a more ethical user experience. Musk instead plans to open-source Twitter's recommendation algorithm, a plan that some experts believe is overambitious.
Other organizations, such as UNESCO, have declared ethical machine learning an international priority. IBM states that its AI ethics work focuses on ensuring that "technology must be transparent and explainable," yet the way deep learning models work is inherently at odds with that goal. While researchers can analyze the input data and how a model was developed, the process by which a deep learning model actually arrives at a decision remains largely a black box.
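To make the black-box point concrete, here is a minimal sketch of one common post-hoc explanation technique, input-gradient saliency, assuming a toy PyTorch classifier; the model and input are placeholders for illustration, and methods like this give only a local, approximate view of why a model decided what it did.

```python
# A minimal sketch of input-gradient saliency on a toy PyTorch classifier.
# The model and input below are placeholders for illustration only.
import torch
import torch.nn as nn

# Hypothetical model: a tiny feed-forward classifier over 4 features.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one example, 4 features
logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the predicted class score to the input features.
logits[0, predicted_class].backward()

# Large absolute gradients mark features the prediction is sensitive to,
# but this is an approximation, not the model's actual "reasoning."
saliency = x.grad.abs().squeeze()
print(saliency)
```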
Another ethical dilemma arises when considering how these models are trained. Deep learning systems at the scale used by industry giants require enormous amounts of data to train effectively. For relatively innocuous tasks such as text recognition or digit scanning, obtaining large quantities of data is straightforward. For more sensitive data, however, ethical collection becomes murkier. Facial recognition models have been found to use footage from video surveillance, often without passersby's consent, or even their knowledge.
So in an industry dominated by Big Data, how can small disruptors make a difference? Startups have the advantage of being disconnected from larger corporate agendas and funding, and could thus act as impartial players in judging what "fair" really means. As the companies we use daily shift toward more AI-centered approaches, ethical systems will be crucial to reducing discrimination in the job, housing, and financial markets.
Having safe AI affects our personal security, an extension of our physical safety in a technologically driven world. It also has an impact on self-actualization. When used knowledgeably, AI can supplement creative endeavors, enhance personal decisions, and offer data-driven direction. As such, it is crucial that the benefits of artificial intelligence are distributed equitably. Further, training the largest models, with their hundreds of billions of parameters, carries an enormous environmental cost, meaning industry must be conscious in how it uses the technology.
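To give a rough sense of that scale, the sketch below is a back-of-envelope estimate using the commonly cited approximation of about 6 × parameters × tokens for training compute; every specific number in it is an assumption chosen for illustration, not a measurement of any real system.

```python
# Rough, illustrative back-of-envelope for training energy.
# Every figure here is an assumption for illustration, not a measurement.
params = 175e9             # assumed parameter count (GPT-3 scale)
tokens = 300e9             # assumed number of training tokens
flops = 6 * params * tokens  # common approximation for training FLOPs

gpu_flops_per_s = 100e12   # assumed sustained throughput per accelerator
gpu_power_watts = 400      # assumed power draw per accelerator

accelerator_seconds = flops / gpu_flops_per_s
energy_kwh = accelerator_seconds * gpu_power_watts / 3.6e6  # joules -> kWh

print(f"~{energy_kwh / 1e3:.0f} MWh of accelerator energy (illustrative)")
```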
As AI grows less distinguishable from human work, these questions will grow exponentially more important. Disagree? Parts of this essay are computer generated. The future is closer than we think...maybe it's already here.