UB receives $800,000 NSF/Amazon grant to improve AI fairness in foster care

Posted: February 4, 2020 at 9:52 am



A multidisciplinary UB research team has received an $800,000 grant to develop a machine learning system that could eventually help caseworkers and human services agencies determine the best available services for the more than 20,000 youth who annually age out of foster care without rejoining their families.

The National Science Foundation and Amazon, the grant's joint funders, have partnered on a program called Fairness in Artificial Intelligence (FAI) that aims to address bias and build trustworthy computational systems that can contribute to solving the biggest challenges facing modern societies.

Over the course of three years, the UB researchers will develop the tool in collaboration with the Hillside Family of Agencies in Rochester, one of the country's oldest nonprofit family and youth human services organizations, and with a youth advisory council made up of individuals who have recently aged out of foster care. They will also consult with national experts across specializations to inform this complex work.

Researchers will use data from the Administration for Children and Families' (ACF) federally mandated National Youth in Transition Database (NYTD), along with input from collaborators, to inform their predictive model. Every state participates in NYTD, reporting the experiences of and services used by youth in foster care.

The team's three-pronged goal is to use the experiences of youth, caseworkers and experts in the foster care system to identify the often hard-to-find biases in data used to train machine learning models; to gather multiple perspectives on fairness with respect to decisions about services; and then to build a system that can deliver services more equitably and efficiently.

Social scientists have long considered questions of fairness and justice in societies, but beginning in the early part of the 21st century, there was growing awareness of how computers might be using unfair algorithms, according to Kenneth Joseph, assistant professor in the Department of Computer Science and Engineering and one of the co-investigators of the project.

Joseph is an expert in machine learning whose research focuses on how biases work their way into computational models, and on understanding and addressing the social and technical processes responsible for introducing them.

Machine learning refers to any computer program that extracts patterns from data. Unsupervised learning identifies those patterns, while supervised learning uses them to predict something.
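To make the distinction concrete, here is a minimal sketch in Python using scikit-learn. The features and labels below are invented for illustration; they are not the project's NYTD data or methods.

    # Minimal sketch of the unsupervised vs. supervised distinction.
    # Synthetic, hypothetical data -- not the project's NYTD data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented features for 200 youth: age and number of placements.
    X = np.column_stack([rng.integers(14, 21, 200),
                         rng.integers(1, 10, 200)])

    # Unsupervised learning: find groupings in the data without any labels.
    clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)

    # Supervised learning: predict a known outcome label from the features.
    # Toy label: flag youth with many placements as likely needing services.
    y = (X[:, 1] > 5).astype(int)
    model = LogisticRegression().fit(X, y)
    print(model.predict(X[:5]))  # predicted service-need flags for five youth

In the project's setting, the labels would presumably come from recorded outcomes and services rather than a hand-written rule like the one above.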

"Our supervised problem is to take the information available about a particular child and make a prediction about how to allocate services," says Joseph. "Our goal is to help social workers identify youth who might benefit from preventative services, while doing so in a manner that participants within the system feel is fair and equitable.

"We also want our approach to have applications beyond foster care, so that eventually our approach can be used in other public service settings."

A machine learning model's greatest asset, however, might also be its greatest liability. Machine learning algorithms learn from no source other than the data they're provided. If the original data is biased, Joseph says, the algorithm will learn and echo those biases.

For instance, a loan-approval model trained on data that favors applicants by income and geography may be relying on information that carries inherent race, ethnicity and gender disparities.
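A small synthetic sketch can show the mechanism: even when a protected attribute is withheld from a model, a correlated proxy such as geography can reproduce the disparity. Everything below (variables, rates, thresholds) is invented for illustration.

    # Sketch of proxy bias: the model never sees the protected attribute,
    # yet reproduces a gap between groups. All numbers are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 10_000

    group = rng.integers(0, 2, n)  # protected attribute, withheld from the model
    # Geography correlates strongly with group membership...
    zipcode = (group + rng.random(n) > 0.8).astype(int)
    # ...and income tracks geography.
    income = rng.normal(50 + 15 * zipcode, 10, n)

    # Historical approvals favored income and geography.
    approved = (income + 20 * zipcode + rng.normal(0, 5, n) > 60).astype(int)

    # Train only on the seemingly "neutral" features.
    X = np.column_stack([income, zipcode])
    pred = LogisticRegression(max_iter=1000).fit(X, approved).predict(X)

    # The learned model echoes the historical disparity by group.
    print("approval rate, group 0:", pred[group == 0].mean())
    print("approval rate, group 1:", pred[group == 1].mean())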

"There are many ways algorithms can be unfair, and very few of them have anything to do with math," says Joseph.

Finding and correcting those biases raises questions about using computers to make decisions affecting what is already a vulnerable population.

By age 19, 47% of foster care youth who have not been reunited with their families have not finished high school, 20% have experienced homelessness and 27% of males have been incarcerated, according to the ACF's Children's Bureau.

But Melanie Sage, assistant professor in the School of Social Work and another of the grant's co-principal investigators, says this project is about providing caseworkers with an additional tool to help inform, not replace, their decision-making.

"We never want algorithms to replace the decisions made by trained professionals, but we do need information about how to make decisions based on likely outcomes and what the data tell us about pathways for children in foster care," she says.

Sage says their work on this grant is critical given the generational impact caseworkers and agencies have on the lives of foster youth.

"When a determination is made that services should be provided for protection because kids are not better off with their families, those kids are deserving of the best services and interventions that the child welfare system can offer," she says. "This research ideally gives us another tool that helps make that happen."

The project's other co-investigators are Varun Chandola, assistant professor of computer science and engineering; Huei-Yen Chen, assistant professor of industrial and systems engineering; and Atri Rudra, associate professor of computer science and engineering.






