Strongly Recommended Download: AI Now Report 2018 - The AI Now Institute NYU


The AI Now Institute at New York University is an interdisciplinary research institute dedicated to understanding the social implications of AI technologies. It is the first university research center focused specifically on AI’s social significance. Founded and led by Kate Crawford and Meredith Whittaker, AI Now is one of the few women-led AI institutes in the world. 

AI Now works with a broad coalition of stakeholders, including academic researchers, industry, civil society, policy makers, and affected communities, to identify and address issues raised by the rapid introduction of AI across core social domains. 

AI Now produces interdisciplinary research to help ensure that AI systems are accountable to the communities and contexts they are meant to serve, and that they are applied in ways that promote justice and equity. The Institute’s current research agenda focuses on four core areas: bias and inclusion, rights and liberties, labor and automation, and safety and critical infrastructure. 

Our most recent publications include: 

● Litigating Algorithms, a major report assessing recent court cases focused on government use of algorithms 
● Anatomy of an AI System, a large-scale map and longform essay produced in partnership with SHARE Lab, which investigates the human labor, data, and planetary resources required to operate an Amazon Echo 
● Algorithmic Impact Assessment (AIA) Report, which helps affected communities and stakeholders assess the use of AI and algorithmic decision-making in public agencies 
● Algorithmic Accountability Policy Toolkit, which is geared toward advocates interested in understanding government use of algorithmic systems 

We also host expert workshops and public events on a wide range of topics. 

Our workshop on Immigration, Data, and Automation in the Trump Era, co-hosted with the Brennan Center for Justice and the Center for Privacy and Technology at Georgetown Law, focused on the Trump Administration’s use of data harvesting, predictive analytics, and machine learning to target immigrant communities. 

The Data Genesis Working Group convenes experts from across industry and academia to examine the mechanics of dataset provenance and maintenance. Our roundtable on Machine Learning, Inequality and Bias, co-hosted in Berlin with the Robert Bosch Academy, gathered researchers and policymakers from across Europe to address issues of bias, discrimination, and fairness in machine learning and related technologies. 

Our annual public symposium convenes leaders from academia, industry, government, and civil society to examine the biggest challenges we face as AI moves into our everyday lives. The AI Now 2018 Symposium addressed the intersection of AI ethics, organizing, and accountability, examining the landmark events of the past year. 

Over 1,000 people registered for the event, which was free and open to the public. Recordings of the program are available on our website. More information is available at

Download the full December 2018 Report here:


At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? 

Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues: 

1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected 
2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression 
3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures 
4. Unregulated and unmonitored forms of AI experimentation on human populations 
5. The limits of technological solutions to problems of fairness, bias, and discrimination 

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. 

We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute's location and regional expertise are concentrated in the U.S., this report focuses primarily on the U.S. context, which is also where several of the world's largest AI companies are based. 

The AI accountability gap is growing: The technology scandals of 2018 have shown that the gap between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller. 

There are several reasons for this, including a lack of government regulation, a highly concentrated AI sector, insufficient governance structures within technology companies, power asymmetries between companies and the people they serve, and a stark cultural divide between the engineering cohort responsible for technical research, and the vastly diverse populations where AI systems are deployed. 

These gaps are producing growing concern about bias, discrimination, due process, liability, and overall responsibility for harm. This report emphasizes the urgent need for stronger, sector-specific research and regulation. 

AI is amplifying widespread surveillance: The role of AI in widespread surveillance has expanded immensely in the U.S., China, and many other countries worldwide. This is seen in the growing use of sensor networks, social media tracking, facial recognition, and affect recognition. These expansions not only threaten individual privacy, but accelerate the automation of surveillance, and thus its reach and pervasiveness. 

This presents new dangers and magnifies many longstanding concerns. The use of affect recognition, based on debunked pseudoscience, is also on the rise. Affect recognition attempts to infer inner emotional states through close analysis of the face, and is linked to spurious claims about people's mood, mental health, level of engagement, and guilt or innocence. 

This technology is already being used for discriminatory and unethical purposes, often without people’s knowledge. Facial recognition technology poses its own dangers, reinforcing skewed and potentially discriminatory practices, from criminal justice to education to employment, and presents risks to human rights and civil liberties in multiple countries. 

Download the full free report here:
    Francisco Gimeno - BC Analyst AI technology is growing and expanding, and although we are still some time away from the singularity, it's time for those working with this technology to tackle very practical and urgent issues, such as control, fairness in AI algorithms, and accountability. This is not just any technology, but a totally disruptive one that can change everything, into dystopian societies or into a better world.