Stevens Is All In on AI
100+ faculty are creating 'beneficial AI' to keep us safer and healthier, inform a greener future and weed out hate and bias
The development and application of artificial intelligence have exploded in the past decade, transforming our lives.
Everything from the phones we carry to the cars we drive already packs some form of AI, trained by millions of words, images and behaviors to pick out hidden patterns, make predictions in a split-second — and learn over time to make ever better ones.
Yet Americans are becoming worried about AI, according to a recent Stevens TechPulse survey that revealed sharply increased concerns during the past two years.
“Americans have become significantly more wary of AI, its capabilities and the considerable impact it has,” notes Brendan Englot, director of the Stevens Institute for Artificial Intelligence (SIAI), an interdisciplinary center drawing on more than 100 faculty from all of Stevens’ schools.
These SIAI-affiliated faculty are working to create AI that benefits society — systems that predict Alzheimer’s and heart disease, inspect hazardous cargo, keep roads and bridges safer — while also studying AI’s potential drawbacks and dangers.
“AI is powerful new technology, and we are working to harness that power for societal good,” says Englot.
Leveraging AI for better health
Stevens innovations enhance health data, informing both improved diagnosis and decision-making in medical settings.
Biomedical engineer Yu Gan designs algorithms that clarify medical images of our hearts to help physicians diagnose dangerous cardiac conditions earlier. Gan is also building systems that enable new insight into anti-cancer therapies at the cellular level.
Systems scientist Sang Won Bae develops AI-powered smartphone applications that can accurately spot depression from pupillary reflexes, head postures and facial gestures. Working with Stanford and the University of Pittsburgh, Bae also develops applications to predict substance use from smartphone data. Importantly, none of her systems view or read device contents.
Professor Ying Wang co-designed RAIDER, an AI system that quickly scans chest X-rays to predict, with 97% to 98% accuracy, whether a patient will be diagnosed with COVID or viral pneumonia; the AI was first “trained” on images of healthy and sick patients.
Stevens biomedical engineering chair Jennifer Kang-Mieler and Gan collaborated on an NIH-sponsored effort to use AI in novel ways to generate data, train learning models and detect a specific ocular disorder early in premature infants.
Computer scientist Samantha Kleinberg, whose research is robustly supported by NIH and NSF, develops sophisticated algorithmic systems to address a host of healthcare challenges including stroke care, nutrition and decision-making in medical settings.
With Major League Baseball’s support, Antonia Zaferiou and a 20-plus member biomedical engineering team of undergraduate and graduate students analyze the motions of pitchers to reduce injury to young arms and improve coaching. Damiano Zanotto develops unique in-shoe sensors that monitor gait and movement to inform rehabilitation efforts.
SIAI founding director K.P. Subbalakshmi develops a host of AI-powered technologies, including iterations that predict Alzheimer’s disease, dementia and loneliness with very high accuracy by analyzing people's use of language.
Computer scientist Yue Ning, mining CDC and other data, developed an AI that accurately predicts the spread of epidemic diseases such as the flu.
Keeping us safer, improving life, battling bias
Some Stevens researchers are designing AI to keep us physically safer.
Kaijian Liu, for example, develops algorithms that monitor bridges and predict, with very high accuracy, which are in danger of failing so that repairs can be prioritized. In early tests examining several thousand U.S. bridges, his AI accurately flagged dangerous issues 90% of the time — 15% to 20% more accurately than existing methods.
In Stevens’ Davidson Lab, one of the nation’s leading flood and storm-surge prediction and warning tools is powered by AI and supercomputing. The system correctly forecasted Hurricane Sandy’s record flooding in advance. Many dozens of metropolitan-area planning and resiliency officials consult it daily.
Undergraduate Samantha Weckesser ’23 led a team that developed AI that rapidly assesses cargo containers in port, flagging those more likely to contain hazardous materials — important in an era when container-ship fires happen with alarming regularity. The system is now employed by the Coast Guard.
Other researchers work to improve quality of life and ensure a more sustainable future for the planet:
▪ Knut Stamnes and Marouane Temimi independently develop algorithms to sharpen satellite images of Earth, enabling improved scientific monitoring and interpretation of global warming, weather, climate change and food and water supplies.
▪ Energy experts Lei Wu and Philip Odonkor use AI to plan, monitor and optimize the power systems, power grids and smart buildings of the near future.
▪ Jonggi Hong develops a suite of AI-powered tools that better empower technology users with disabilities.
▪ Dean Kelland Thomas creates AI that composes original jazz by studying the masters.
▪ Business professor Jeff Nickerson co-develops AI that brainstorms novel ideas by scanning written materials.
SIAI director Englot himself develops AI to help autonomous craft navigate unfamiliar environments more efficiently.
“This technology could be useful,” says Englot, “in development of inspection robots that go places we can’t, or don’t want to, go: high-voltage power lines, sewer lines, the undersides of bridges and ships.”
Stevens researchers also delve into important questions about how and why AI-powered systems may be biased in some of their predictions and decisions.
Ning, for instance, develops AI to detect virulent speech in conversational streams on social media, giving moderators a new tool to tamp down the viral spread of hate. Subbalakshmi’s teams design systems to spot “fake news” and develop tools that render AI systems more transparent, revealing the unseen processes behind their decisions.
Jia Xu takes a different tack, studying the AI-powered systems used to assist in mortgage lending.
“Our research has found that race and age are not useful features at all in determining whether someone is a credit risk,” explains Xu. “Yet this is part of the way most current AI decides whether you receive a line of credit or a home loan. We need to build something better, more accurate, more bias-free.”
“As AI and machine learning race ahead, it’s critical to understand both the positive applications and the dangers,” sums Englot. “At Stevens, we’re helping lead the way forward on both fronts.”