Computer science isn’t a new discipline at the University of Georgia.
For decades, its degree programs have prepared students for careers in software development, computer systems support, and the information technology departments that exist at virtually every large company in America.
Recent years, however, have shown just how expansive our technological advancements can be. From the growth in online social platforms and deployment of artificial intelligence in manufacturing to generative AI tools like ChatGPT and the computing power we carry in our pockets, computing has grown beyond the borders that were once thought to hold it in.
To prepare for a world inexorably tied to these technological advances, UGA established a task force in 2022. Nationally, employment in STEM-related occupations was projected to grow by 8% through 2029. Within that group, computer science and engineering are among the highest forecasted growth.
The task force led to a hiring initiative in data science and artificial intelligence and culminated with the launch of the School of Computing on July 1, 2022, jointly administered by the Franklin College of Arts and Sciences and the College of Engineering.
Since Fall 2022, the school has added seven new tenure-track faculty. It now serves nearly 1,800 undergraduate students pursuing computer science and data science degrees, another 260 graduate students, and more than 200 students pursuing one of its three certificate programs.
In Summer 2023, Professor Gagan Agrawal joined the UGA ranks as the school’s first director.
“I came to UGA because it is both a strong research university and also an incredible opportunity to build for the future,” said Agrawal, who spent nearly two decades at Ohio State before becoming a professor and associate dean for research and graduate studies at Augusta University in Georgia. “We have strengths in cybersecurity and privacy, artificial intelligence and machine learning, data science and computer systems.
“But there is so much room to grow.”
The future of computing at UGA extends beyond the doors of Boyd Hall, where most of the school’s faculty currently reside. In the modern world, it’s a part of every discipline across campus. There are health applications in biomedical computing and bioinformatics, generative AI questions that involve the humanities and arts, and policy issues that intersect with the School of Law.
And behind it all, there is the human-centered concern about developing and deploying systems that are safe, responsible, and equitable.
In all these areas and more, UGA is expanding its influence, training students in computer science and conducting research that will impact society worldwide.
Our automated future
Prashant Doshi was an undergraduate student at the V.J. Technological Institute in Mumbai, India, before he began to understand what artificial intelligence was all about.
At the time, just before the turn of the century, AI was a bit of an amorphous domain—pie-in-the-sky fantasies about what it might become without really understanding what it would take to get there.
“I was a computer science major, and we didn’t have AI as a topic of study,” said Doshi, who came to UGA as faculty in 2005. “I had to go to the library and check out a book just to see what this was even about.”
What he saw intrigued him.
He was fascinated by the idea that programmers could train algorithms to take on everyday tasks, perform challenging labor, or solve questions too complicated for human cognition. When he went to Drexel University for his master’s degree, he joined a project involving AI with applications in surgeries and never looked back.
Today, Doshi’s primary research centers on automated decision making.
“We have been working on developing this area—how can automated agents act optimally in a context shared with others,” he said. “If you look at all of my papers, that is the underlying theme: multiagent systems and how agents should act within them.”
In one project, conducted jointly with the Department of Psychology, Doshi and his team investigated automated planning, modeling hurricane evacuation decisions to determine what factors encouraged the desired outcome from people in the path of deadly storms.
It happens without fail: An extremely strong hurricane is barreling toward the coastal United States, and government agencies issue an evacuation notice for the areas of concern. Some portion of the affected community—optimistically, maybe half—acknowledges the warning and heads inland. Others, however, don’t understand the danger or ignore it entirely.
Doshi’s team studied Hurricanes Harvey and Irma, both in 2017, to see what factors led people to their respective decisions. Using AI, they developed a model built on survey data and data gathered from posts on the social platforms Facebook and Twitter, now called X.
The model was able to uncover some crucial information that could inform how these evacuation notices are delivered in the future. For one, the call itself did not correlate with a high rate of response.
“It didn’t play much of a part in the decision at all,” Doshi said. “Many people will say they never even heard a call for it or falsely believed that they did not fall in the evacuation zones identified in the call. Maybe it’s an information access problem, information literacy, or maybe the clarity of the calls themselves.”
At the core of his work, though, has been this idea of decision making. Sometimes that includes AI models that help humans make their own decisions. In the case of another project, it’s the AI making the decision itself.
During the pandemic, labor problems became incredibly acute on many farms in Georgia and across the country. Farms that had relied on human labor to bring produce from the field to store shelves were suddenly short-staffed. The global supply chain was severely impacted.
Doshi’s team is designing collaborative robots to mitigate such labor shortages.
Building stronger data
In 2016, Professor and School of Computing Associate Director Lakshmish Ramaswamy was working on an NSF-funded project involving cyanobacteria along coastal Georgia reefs. These potentially harmful bacteria appear in algal blooms, often red in color, and can make their way into inland water sources used by animals.
“Cows may drink the water and get neurological disorders,” Ramaswamy said. “Then it may enter the food chain.”
He and his team of researchers were using social media, specifically Twitter, in addition to sensors and satellites to track what was happening along coasts and inland water bodies. They had a set of keywords to track on the platform, including one—“red tide”—that referred to this type of coastal growth.
In the fall of that year, however, their social media data went haywire.
“We didn’t know what was happening,” he said. “Suddenly our algorithms were yielding strange results.”
It turns out another group was using the same term on social media. It was election time in the United States, and Republican voters were hoping for a “red wave” in the November elections.
“It was harmless and hilarious,” Ramaswamy recalled, “but that shows the importance of robust data collection processes.”
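The collision Ramaswamy describes is a familiar failure mode in keyword-based data collection. As a minimal sketch (with entirely hypothetical posts and context terms, not the project’s actual pipeline), requiring a domain term to co-occur with the tracked phrase is one simple way to screen out posts that use the same words in an unrelated sense:

```python
# Naive keyword matching flags any post containing the tracked phrase,
# which is how unrelated election posts could slip into a dataset.
def naive_match(post, phrase="red tide"):
    return phrase in post.lower()

# Hypothetical domain vocabulary: a post must also mention one of these
# context terms for the match to count as a coastal-bloom report.
CONTEXT_TERMS = {"algae", "bloom", "coast", "beach", "water", "fish"}

def contextual_match(post, phrase="red tide"):
    words = set(post.lower().split())
    return phrase in post.lower() and bool(words & CONTEXT_TERMS)

posts = [
    "Massive red tide bloom spotted off the coast this morning",
    "Hoping for a red tide in the November elections!",
]
print([naive_match(p) for p in posts])       # both posts match the phrase
print([contextual_match(p) for p in posts])  # only the coastal report passes
```

Real systems would go further (language models, geotags, account history), but the principle is the same: the phrase alone is not the signal.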
Ramaswamy and his team focus on this area of data collection and analytics. The field only grows in importance as AI and automated decision making, like Doshi’s, become more prevalent.
One of Ramaswamy’s projects takes data and applies it to another critical field of research in recent years: climate change.
Working with Department of Geography Professors Deepak Mishra and Andy Grundstein on another NSF-funded project, Ramaswamy has used data collection to map urban heat distribution in Athens and Oconee County.
Urban heat is a phenomenon in which urban centers, both large and small, tend to be hotter than their rural surroundings, often by significant margins. Asphalt roads, concrete surfaces, and fewer trees cause temperatures to rise in these locales compared with greener areas beyond their borders.
Ramaswamy and his team used wearable heat sensors and sensors attached to buses to collect highly localized data around the city to identify temperature variability on a block-by-block basis.
“We wanted to track this with very fine granularity,” he said.
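One way such mobile sensor readings might be turned into a block-by-block map is to snap each GPS-tagged temperature to a small grid cell and average within cells. This is a toy sketch with made-up coordinates and a hypothetical grid resolution, not the project’s actual method:

```python
from collections import defaultdict

def grid_cell(lat, lon, cell_deg=0.005):
    """Snap a coordinate to a grid cell roughly a city block across.
    (A hypothetical resolution; the real project's granularity may differ.)"""
    return (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)

def block_averages(readings):
    """Average temperature per grid cell from (lat, lon, temp_f) readings."""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, temp in readings:
        cell = grid_cell(lat, lon)
        sums[cell][0] += temp
        sums[cell][1] += 1
    return {cell: total / count for cell, (total, count) in sums.items()}

# Made-up readings: two sensors near the same downtown block, one in a park.
readings = [
    (33.9570, -83.3760, 96.1),
    (33.9571, -83.3762, 97.3),
    (33.9400, -83.3600, 89.0),
]
for cell, avg in sorted(block_averages(readings).items()):
    print(cell, round(avg, 1))
```

Averaging within cells smooths out individual sensor noise while preserving the block-to-block differences the team was after.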
The project served a handful of purposes. For one, it helped identify commonalities among areas of higher temperatures. For another, the team could learn the extent of the problem in vulnerable areas—schools, hospitals, and low-income housing facilities.
“These places have kids playing outside or competing in athletics, or maybe they lack air conditioning,” he said. “If I’m a professor sitting in my office, I’m not experiencing the same kind of risk a roofer or a person picking up garbage or working on a road might face.
“Climate change is a fact. The planet is getting warmer, and we want to understand who is being most affected by these changes.”
This data science project could inform policy within government or landscape architectural strategies that create more greenspaces or different surfaces to limit the temperature discrepancies.
Putting humans first
As data, automation, and all the other fascinating technological advances continue, though, one critical part of computing systems is often overlooked by society at large.
The human.
Algorithms are developed by humans and for humans, but the humans themselves are too often an afterthought. It’s a broad oversight that Agrawal said UGA can’t afford.
“We are deploying AI to applications of critical infrastructure, smart cars, and smart cities,” he said. “We have datasets training AI to make critical decisions, and it will only become a more ever-present part of our lives.
“Being responsible and fair is vitally important, and that involves multiple parties, policies, schools, and people.”
Assistant Professor Ari Schlesinger, a recent hire who started at UGA in Fall 2023, is focused on this area of human-centered computing.
“My ultimate research goal is harm reduction,” Schlesinger said. “How can we reduce harm and reduce discrimination caused by computer systems? Whether that’s the interface or the algorithm, the classroom or the workplace, how do we reduce harm and improve equity among our technological advancements.”
Consider this scenario: A company is hiring for a department and, hoping to remove human bias from its recruitment process, implements an automated system to sift through résumés. When the hiring manager schedules interviews based on the AI’s recommendations, they are shocked to find a consistent pattern among all the “top” candidates: They are white males from exclusive educational backgrounds.
Instead of selecting the best candidates from a diverse pool, the AI, which was trained on data biased toward historical hiring norms, limited its picks to a single demographic.
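The mechanism is easy to reproduce in miniature. Here is a toy sketch, with entirely made-up groups and hiring history, of a naive screener that learns group-level hire rates from past decisions and then ranks new candidates by them:

```python
from collections import defaultdict

def learn_group_rates(history):
    """Historical hire rate per demographic group -- the 'training data'."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in history:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

# Made-up history reflecting biased past decisions, not candidate quality:
# group A was hired 3 times out of 4, group B only once out of 4.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = learn_group_rates(history)

# A screener scoring candidates by their group's past hire rate ranks
# every group-A candidate above every group-B candidate, regardless
# of individual merit -- the bias in the data becomes the model.
candidates = [("Ana", "A"), ("Bo", "B"), ("Avi", "A"), ("Bea", "B")]
ranked = sorted(candidates, key=lambda c: rates[c[1]], reverse=True)
print([name for name, _ in ranked])
```

Real résumé screeners are far more complex, but the failure is the same in kind: a model optimized to reproduce historical decisions reproduces historical bias.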
This isn’t just fantasy. It’s already happening in the real world.
Schlesinger is working on one project with Doshi and other researchers from the Department of Philosophy, Public Policy, Terry College of Business, and the School of Law to mitigate such a scenario. The project aims to develop institutional ethics for artificial intelligence. This will include suggestions for how to develop AI systems that support ethical use and how to decide if an automated “solution” is even worthwhile.
The project’s abstract includes a scenario like the one above and lists out several others: large language models may be used by lawyers to generate misleading or unethical arguments, wellness chatbots may provide problematic advice for patients, and generative AI raises new questions about trust and intellectual property.
“When it comes to human-centered computing, sometimes we’re just asking questions about the basics,” Schlesinger said. “Is something like gender recognition even a worthwhile endeavor for AI classification systems to employ? Why is this something we need to be able to recognize via someone’s voice or image? And, in 2024, what does gender even look like?
“Many folks developing AI aren’t necessarily thinking about these sociocultural concepts like gender as a construct. We need to clarify what we’re really searching for with these systems. That additional level of clarity can inform how we think about and design AI projects.”
Schlesinger sees this human-centered computing framework as a call to action, an opportunity for the School of Computing to be leaders on a campus full of diverse areas of study.
“It’s hard to become what you cannot see,” she said. “Even if human-centered computing isn’t the core of your research, when people see it represented here at UGA, it can become a part of what they do.”