
$3M DARPA Grant Funds AI Research Model Named LUCIFER


Imagine that you are a doctor managing the emergency room of a large hospital. You suddenly get a call that there has been a mass shooting at a concert a few miles away. In 20 minutes, you will be responsible for triaging over 200 patients with a range of injuries. You barely have enough staff or resources, and the hospital policies are not designed for a situation this dire.


“When people respond to emergencies, many decisions they face are quite predictable. They’re trained on them, and there’s policy,” says Neil Shortland, associate professor in the School of Criminology and Justice Studies. “But every now and then, they get stuck with a really tough decision that they’ve never trained for and never experienced, and they don’t have any guidance as to what the right thing to do is. Although these decisions are rare, they occur in the most extreme situations with the highest stakes.”


Shortland and an interdisciplinary team of UMass Lowell researchers are exploring how artificial intelligence (AI) could be used to make those difficult decisions.


Human judgment is fallible. Even if someone is highly qualified to make a decision, their judgment can be skewed by biases, hunger, tiredness, stress and other factors, Shortland says.


“AI eliminates those issues,” he says. “It can be the best version of a person each time.”


To study which human attributes lead to the best decisions in different scenarios, the researchers will expose people to simulated emergencies using a computer research tool developed by Shortland called the Least-worst Uncertain Choice Inventory For Emergency Responses (LUCIFER). They will then measure how a person's psychological traits and values affect their decisions.

