Investigating the AI Revolution


Artificial intelligence is transforming business and society at breathtaking speed. Freeman Business spoke with four business school faculty members whose research is shedding new light on the advantages, risks and most effective uses of AI. Illustration by John Hersey.

In a conversation with McKinsey’s Michael Chui last May, Wharton School of Business Professor Ethan Mollick shared some startling statistics about the impact of artificial intelligence on business.

“We’re seeing in early controlled experiments anywhere between 30% and 80% performance improvements for individual tasks ranging from coding to writing, marketing, and business materials,” Mollick said. “To give you some context, steam power when it was added to a factory in the early 1800s increased performance by 18 to 22%. These are numbers we’ve never seen before.”

It’s no exaggeration to say that artificial intelligence represents the biggest technological disruption to hit society since the birth of the internet, and its long-term impact may be even greater, giving organizations the ability to produce better results at faster speeds and lower costs across a wide range of tasks.

With growing advances in generative AI — which goes beyond traditional rule-based AI systems to produce new, creative outputs — PwC economists estimate that AI could contribute up to $15.7 trillion to the global economy by 2030, a 14% increase in GDP.

That growth in productivity may come at a cost. Goldman Sachs economists estimate that 300 million full-time jobs worldwide could be exposed to automation — including roughly two-thirds of all U.S. occupations — but they note that that exposure will not necessarily lead to layoffs.

“Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI,” write economists Joseph Briggs and Devesh Kodnani.

An even more optimistic forecast comes from the World Economic Forum, which in 2020 projected that AI would create 97 million new jobs by 2025, far more than would be eliminated. Workers who can adapt their existing skills or learn to leverage AI through reskilling efforts will have abundant opportunities, WEF argues, but it cautions that “without proactive efforts, inequality is likely to be rampant, skills gaps widespread, and the very foundation of the social contract could be unstable.”

Adapting to a future that effectively leverages artificial intelligence’s staggering potential while mitigating its risks will require extensive study and coordinated efforts across public and private sectors in the coming years, and that’s something the faculty of Tulane’s A. B. Freeman School of Business is already working on.

Faculty members from across disciplines are currently conducting research investigating the impacts of AI on a wide range of business, civic and governmental activities. For this feature, Freeman Business spoke with four faculty members about recent research that sheds light on the advantages and risks of AI.


 

Yumei He is an assistant professor of management science. Her research focuses on utilizing cutting-edge information technology tools to enhance the market efficiency of digital platforms. This includes a focus on topics such as human-AI collaboration, the commercialization of generative AI, and the exploration of open-source Large Language Models (LLMs). Her work has been published in Information Systems Research and Journal of the Association for Information Systems. She serves as an external researcher for high-tech companies and startups. Her working paper entitled “The Role of AI Assistants in Livestream Selling: Evidence from a Randomized Field Experiment” explores the role of AI-powered “assistants” in livestream selling, a fast-growing retailing format in which sellers promote and sell products through livestreams.

Yumei He

FREEMAN BUSINESS: First of all, what is livestream selling?

YUMEI HE: Livestream selling, also known as livestream shopping, is an innovative approach to online retailing that integrates live video. In this model, streamers — often popular social media influencers — host live video streams showcasing and selling products. These streams offer a real-time, interactive shopping experience where thousands or even millions of viewers can watch and purchase products instantly. The unique aspect is its fusion of video commerce with interactive features. Streamers not only display and discuss products but also engage directly with their audience, responding to questions and comments during the live broadcast. With its many advantages, livestream selling is revolutionizing online shopping. This transformation stems largely from the real-time interaction between consumers and social media influencers, creating a more dynamic and engaging shopping experience. Many media outlets have heralded livestream selling as the future of online shopping, citing its growing popularity and innovative approach.

 

FREEMAN BUSINESS: What are some of the drawbacks to livestream selling?

YUMEI HE: One of the most significant challenges in livestream selling is balancing service capacity with the demands of viewers. Consumers engaging in livestreams expect quick and personalized responses to their product inquiries, which are crucial for their purchasing decisions. However, streamers are often in a challenging position as they must multitask, handling an onslaught of questions in real-time. This leaves them with limited time and capacity to provide individualized responses to each viewer’s query. This capacity-demand imbalance is a critical issue that livestream sellers are currently grappling with.

 

FREEMAN BUSINESS: That leads us to your study. Talk a little bit about how algorithm-based assistants can address that challenge for livestream sellers.

YUMEI HE: AI-powered assistants analyze viewer comments and questions during the livestream using Natural Language Processing (NLP). They identify individual customer needs and address them with automated, personalized responses at scale. For example, an AI assistant may recognize a question in the chat about a product’s sizing and instantly respond to the viewer with the relevant details. By responding to questions that the host misses, AI assistants alleviate the service capacity-demand tension streamers face managing audience interactions while presenting.

 

FREEMAN BUSINESS: How did you test the viability of AI-powered assistants?

YUMEI HE: We collected data from a randomized field experiment in collaboration with Taobao Live, a leading livestream selling platform in China. The experiment involved over 10 million users, divided into treatment and control groups. In accordance with non-disclosure agreements, we further narrowed down our focus to a randomly selected sample of over 130,000 customers. During a five-day period, the treatment group experienced interactions with an AI assistant in livestreams while the control group did not. We tracked and analyzed detailed clickstream data for both groups to evaluate if significant between-group differences existed in key metrics, including purchases, product returns and other shopping behaviors.
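The core of the comparison He describes — testing whether a key metric differs between treatment and control groups — can be illustrated with a two-proportion z-test. This is a generic sketch with fabricated counts, not the study’s actual data or estimation strategy.

```python
# Illustrative two-proportion z-test on purchase rates. The group sizes and
# purchase counts below are made up for demonstration purposes.
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Return the z statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical data: treatment group saw the AI assistant, control did not.
z = two_prop_ztest(x1=3_420, n1=65_000,   # treatment purchases
                   x2=3_280, n2=65_000)   # control purchases
print(f"z = {z:.2f}")  # |z| > 1.96 would indicate significance at the 5% level
```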

 

FREEMAN BUSINESS: What were your findings?

YUMEI HE: We found that AI-powered assistants are useful. The implementation of the AI assistant increased purchases by 2.61% and, more importantly, reduced product returns by 62.86%. Given that the typical product return rate in this market is 30% to 50%, the magnitude of this impact is quite substantial. In addition, the AI assistant impacted the consumer’s purchase funnel. It increased the consumer’s time spent listening to presentations before clicking products, increased chances of final purchase after clicks, and lowered post-purchase regret and returns. Another interesting finding was that, with the AI assistant’s involvement, there was a noticeable decrease in emotional language and expressions of affection in viewer comments. This suggests that consumers’ decisions became more rational and less impulsive after interacting with the AI assistant.

FREEMAN BUSINESS: What implications do you think your study has for the future of livestream selling?

YUMEI HE: Given AI’s immense potential for enabling personalized interactions, we anticipate a rapid increase in the adoption of AI assistants in livestreaming and video commerce sectors. Our findings offer compelling evidence that the deployment of AI assistants can significantly enhance sales conversions and reduce expensive product returns. With the advent of disruptive innovations like Large Language Models, AI is becoming increasingly sophisticated and adept at engaging in meaningful conversations with consumers. I think retailers should definitely consider investing more in their AI assets, including AI streamer assistants — and potentially even AI streamers themselves.


 

Eugina Leung’s work explores the influence of technology on consumer judgment and how technology hinders identity-based consumption. An assistant professor of marketing, she also investigates the role of language and culture in consumer response. Her work has been published in the Journal of Marketing Research and the Journal of Consumer Psychology.

Eugina Leung

FREEMAN BUSINESS: Your paper “The Narrow-Taste Effect: When Consumers Display Narrow Tastes to Algorithmic Recommenders,” which is currently under review, investigates how users interact with services that use AI-powered algorithms to provide recommendations. What were your findings?

EUGINA LEUNG: My co-authors (Phyliss Gai and Anne Klesse) and I found that consumers tend to express narrower preferences when sharing them with an algorithmic recommender system compared to when they just list their own preferences. This is what we mean by the “narrow-taste effect.” Specifically, consumers omit interests that they don’t think are central to their identity, most likely to avoid the risk of the algorithm misclassifying their tastes if they share preferences that are too diverse.

 

FREEMAN BUSINESS: Why is it important to streaming services for customers to provide diverse preferences?

EUGINA LEUNG: Providing more diverse preferences leads to more diverse recommendations. Additionally, we found that the diversity of videos consumed also relates positively to indicators of user engagement, including the number of videos watched and watch duration.

 

FREEMAN BUSINESS: And greater user engagement leads to greater revenue. How did you establish the narrow-taste effect?

EUGINA LEUNG: We conducted seven studies, and across each of them, we found that consumers tended to express narrower preferences when they believed their choices were being used by an algorithm to generate personalized recommendations. This was in contrast to scenarios where they were either listing preferences solely for themselves or when they thought a human curator would use their choices. This held across different domains including news, wine and video preferences.

 

FREEMAN BUSINESS: Why do you think consumers express narrower preferences to algorithmic recommenders?

EUGINA LEUNG: Consumers believe algorithms use restrictive rules to classify them, so displaying tangential interests — such as “liking” a Disney movie when your tastes lean more toward action and suspense — could lead to misclassification. By analyzing both the central and tangential interests that consumers chose, we found that people omitted more tangential ones when told they were providing input to an algorithm.
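The comparison Leung describes can be illustrated with a toy calculation: what share of their tangential interests participants disclose when the audience is an algorithm versus themselves. The figures below are fabricated for demonstration and are not the study’s data.

```python
# Fabricated illustration of the narrow-taste effect: participants share a
# smaller share of their tangential interests with an algorithm.

interests = {
    # condition: (tangential interests shared, tangential interests held)
    "algorithm": (12, 40),
    "self":      (28, 40),
}

for condition, (shared, held) in interests.items():
    print(f"{condition}: {shared / held:.0%} of tangential interests shared")
```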

 

FREEMAN BUSINESS: How do narrow inputs from consumers impact the diversity of recommendations?

EUGINA LEUNG: In studies using real video streaming websites, consumers who displayed narrower tastes ended up watching less diverse videos overall. This finding held even when some random video recommendations were mixed in. Thus, both algorithm computations and consumer inputs constrain consumption diversity.

 

FREEMAN BUSINESS: So if algorithms are critical to a company’s business model and your research shows that consumer interactions with the algorithms are flawed, what can companies do to improve them?

EUGINA LEUNG: We tested three remedies: telling consumers the algorithm can understand diverse tastes without losing accuracy, making the algorithm seem more human-like, and asking for preferences before consumers create personal accounts for recommendations. All three approaches increased the preference diversity that consumers displayed, so we feel that each is a good solution.

 

FREEMAN BUSINESS: What are some of the implications of this research?

EUGINA LEUNG: Our findings show that limited recommendation diversity doesn’t just stem from the algorithms. Consumer interactions play a key role. Streaming services or online retailers need to recognize that displayed or volunteered preferences may not fully represent a consumer’s tastes. Easy design tweaks like humanizing algorithms — such as referring to “handpicked” recommendations or depicting the algorithm as a vivid figure with a name — and eliciting diverse inputs can improve preference diversity. More diverse inputs promote more diverse consumption, and more diverse consumption benefits both the consumer and the companies.

 

FREEMAN BUSINESS: In conclusion, what are the biggest takeaways from this research?

EUGINA LEUNG: To enhance recommendation diversity, we need to understand the interplay between algorithms and consumer behavior. This paper introduces the narrow-taste effect to capture how interacting with algorithmic systems fundamentally shapes the preferences consumers share, with substantive impacts on consumption diversity. To counteract the effect, companies can leverage the design features we suggest, resulting in more varied and ultimately more relevant and useful recommendations.


 

Yi-Jen “Ian” Ho is an associate professor of management science. He studies the impacts of emerging information technologies, and his current research focuses on location-based services and advertising, online platforms, and artificial intelligence. His research has appeared in leading business journals, including Information Systems Research and Production and Operations Management. His working paper “AI Enforcement: Examining the Impact of AI on Judicial Fairness and Public Safety,” co-authored with Wael Jabar and Yifan Zhang, received the 2023 INFORMS ISS Cluster Best Paper Award from the INFORMS Information Systems Society. The paper is currently in preparation for submission to Management Science.

Ian Ho portrait

FREEMAN BUSINESS: Talk a little bit about your paper “AI Enforcement: Examining the Impact of AI on Judicial Fairness and Public Safety.”

IAN HO: To better manage overwhelming prisoner populations, some state judicial systems are adopting artificial intelligence to analyze offenders’ recidivism risk and recommend alternatives to incarceration for low-risk offenders. My co-authors and I wanted to understand (1) how these algorithmic recommendations designed to predict offenders’ risk of recidivism impact judges’ sentencing decisions, (2) whether AI helps make the process fairer across demographic groups, and (3) whether it improves public safety through lower repeat offenses. For the study, we analyzed over 56,000 sentencing cases of drug, fraud and larceny offenses in Virginia between 2013 and 2022, a period in which judges had access to an AI tool that provided them with a risk score assessing offenders’ recidivism risk as well as recommendations for alternative punishments for offenders deemed to be low risk. Using this data, we compared outcomes for similar offenders around the cutoff score for receiving AI recommendations, which enabled us to causally assess the impact of the AI advice.
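The cutoff-based comparison Ho describes resembles a regression-discontinuity design: offenders just below and just above the risk-score threshold are assumed to be similar, so differences in their outcomes can be attributed to the AI recommendation. The sketch below is purely illustrative; the cutoff, bandwidth and case records are invented.

```python
# Stylized cutoff comparison with fabricated data. Offenders below the
# hypothetical cutoff receive the AI recommendation for alternatives.

CUTOFF = 38          # hypothetical risk-score threshold
BANDWIDTH = 3        # compare cases within +/- 3 points of the cutoff

# (risk_score, reoffended) pairs, fabricated for illustration
cases = [
    (35, False), (36, True), (36, False), (37, False),   # just below cutoff
    (38, True), (39, True), (40, False), (40, True),     # just above cutoff
]

def recidivism_rate(group):
    return sum(reoffended for _, reoffended in group) / len(group)

near = [c for c in cases if abs(c[0] - CUTOFF) <= BANDWIDTH]
below = [c for c in near if c[0] < CUTOFF]   # received the AI recommendation
above = [c for c in near if c[0] >= CUTOFF]  # did not

print(f"below cutoff: {recidivism_rate(below):.2f}")
print(f"above cutoff: {recidivism_rate(above):.2f}")
```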

 

FREEMAN BUSINESS: What were your findings?

IAN HO: When judges followed the AI recommendations, we found that offenders had a higher chance of receiving alternative punishments, a lower chance of incarceration and shorter prison sentences. Following the AI recommendations also led to lower recidivism rates compared to when judges did not follow the recommendations, offering proof of concept for AI in judicial sentencing.

 

FREEMAN BUSINESS: One of the primary objectives in using AI for sentencing, I’d imagine, was to reduce recidivism. Can you talk a little more about how much using AI recommendations reduced recidivism?

IAN HO: Recidivism was about 14% when both the AI tool and judges recommended alternative punishments but much higher — 25.71% to be specific — when the AI tool recommended an alternative punishment but the judge instead opted to incarcerate. From this, we can conclude that AI helps identify low-risk offenders who can receive alternative punishments without threatening public safety. Overall, it appears clear that following AI recommendations reduces both incarceration and recidivism.

 

FREEMAN BUSINESS: You also observed some examples of judicial bias.

IAN HO: Yes, that’s correct. In the cases we analyzed, we found that judges tend to give lighter sentences to female offenders compared to similar male offenders. However, when judges incorporated the sentencing recommendations from the AI tool, that bias was reduced and sentences became fairer across genders. We also found that while judges gave similar sentences to white and black offenders overall, when the AI tool recommended alternative punishments, judges tended to favor white offenders over similar black offenders. The white offenders got more alternatives and less jail time. To prevent this, we think better AI system training and feedback loops between judges and policymakers could help. Also, we believe judges should be encouraged to pause and reconsider their sentences when they deviate from AI advice.

 

FREEMAN BUSINESS: What do you think are the most important practical implications of this study?

IAN HO: Our paper emphasizes four important takeaways: 1) AI tools can enhance judicial efficiency and safety when incorporated effectively; 2) judges should be aware of unconscious biases that skew discretionary decisions; 3) the public should be aware of the benefits and risks of the increasing use of algorithmic systems in law; and 4) policymakers should continually evaluate AI accuracy and monitor for unfairness in application. Overall, while AI does appear to bolster certain aspects of justice administration, there continue to be instances where human biases intercede and contradict data-driven risk analysis. As a result, we think that ongoing legal and ethical oversight regarding AI is essential.


Angelo DeNisi is the Albert Harry Cohen Chair of Business Administration and a professor of management, and his research focuses on performance appraisal, expatriate management, and work experiences of persons with disabilities. He has served as the president of both the Academy of Management and the Society for Industrial and Organizational Psychology (SIOP), and he has been honored with lifetime achievement awards from SIOP and from the HR Division of the Academy of Management. With co-author Arup Varma, DeNisi contributed “ChatGPT, Talent Management and Advising Managers on Performance Management” to the invited review “HRM in the Age of Generative AI: Perspectives and Research Directions on ChatGPT,” which appeared in Human Resource Management Journal.

Angelo DeNisi portrait
FREEMAN BUSINESS: How can AI be used to improve performance management and appraisal?

ANGELO DENISI: Any discussion of the role of AI in performance management and performance appraisal needs to begin with a definition of performance management and why it’s so important. Performance management is not just the evaluation of an employee’s past performance — that’s performance appraisal. Instead, performance management encompasses all the activities organizations engage in to improve the performance of individual employees and, ultimately, improve the overall performance of the organization. Performance appraisal is a part of every performance management system, but performance management also includes various motivational interventions, such as incentive compensation and feedback. Also, while there have been many critiques of traditional performance appraisal systems, it’s difficult to argue that firms should not engage in performance management. The only question is what the best performance management system should look like and whether the goal is to improve firm-level as well as individual-level performance.

 

FREEMAN BUSINESS: Given that distinction, what role can AI play in performance management and what role can it play in performance appraisal?

ANGELO DENISI: AI’s potential contributions to performance appraisal are probably more obvious. Formal appraisals are infrequent events in organizations, usually taking place once a year, but employees perform their jobs every day, so even if a supervisor could observe an employee all day every day, it would be extremely difficult for them to store that performance information in memory in such a way that they could later recall it when the actual appraisal is due. There has been considerable research on the importance of memory processes in performance appraisal decision making, so I think AI could play an important role in augmenting that process, storing and retrieving the performance information that a rater might observe over time. AI algorithms could be used to collect information on a wide variety of job components, store it in a meaningful way and then access it when needed. AI systems could also help the rater to combine these multiple pieces of information to provide ratings. AI systems can also use this accumulated information to compose performance reviews. Recently, a company called Confirm began using ChatGPT to write performance reviews by asking colleagues of the employee being reviewed to answer questions. Reports are that the company is quite happy with the process but that they’re reluctant to use AI to actually provide the feedback to employees.

 

FREEMAN BUSINESS: Why is that?

ANGELO DENISI: It’s not clear how well employees would accept actual feedback about their performance from an algorithm. Would they see a loss of voice if they could not present mitigating information or present a different view of what happened? Could these AI systems be developed to actually “listen” to employees and modify their feedback? Perhaps, but it doesn’t seem as though we are there yet, and we still don’t understand how empathetic AI feedback would be or how well it would be accepted.

 

FREEMAN BUSINESS: Are there any other potential benefits to using AI for performance management?

ANGELO DENISI: Because of the ability of AI systems to process large amounts of data, these systems may enable firms to better understand the links between individual performance and firm-level performance. Up until now, those links have not been clearly demonstrated, and how to translate improvements in individual performance into improvements in firm-level performance has remained a problem except for very simple cases. If AI systems could actually help uncover the exact nature of those links, it would make a huge contribution to the field of performance management.

 

FREEMAN BUSINESS: What are the biggest downsides to using AI for performance appraisal and management?

ANGELO DENISI: First and foremost, someone must actually “teach” the system how to accumulate and use performance information. Any bias in how the machine is taught to gather or process information will affect any outcomes that are generated by the system and can result in biased feedback as well as biased appraisals, leading to faulty performance management interventions. The potential problem is even more serious because it may be more difficult to demonstrate bias when decisions are made by machines instead of humans. Nonetheless, bias that is the result of statistical or computational processes — that is, bias resulting from how information is combined to make decisions — can be eliminated if AI algorithms are developed properly. But many scholars argue that bias occurs at multiple levels that go beyond computational bias. Human bias occurs when individuals perceive information in a biased way. If raters collect and provide performance information to AI systems, this type of bias would not be eliminated. Finally, there is a systemic bias that results from the way work, jobs and society are structured. This may result in different opportunities for different employees, and relying upon AI systems will only serve to institutionalize this type of bias.

 

FREEMAN BUSINESS: In closing, what’s the future of AI in performance management?

ANGELO DENISI: AI can be used as a tool to help raters carry out appraisals and performance management interventions, but as of right now we still need to rely on imperfect human judgment and imperfect human communication skills to help organizations manage the performance of their employees. Of course, a day may well come when performance information is collected, processed and used by AI systems to make HR decisions. Would that be better than relying on imperfect humans? Did you ever read 1984?

 

 
