The dangers as well as the benefits of AI are coming to the forefront of the news and public discussion. The most visible AI innovators are in the private sector on the West Coast – OpenAI, Microsoft, Google, and Meta – but, behind the scenes, researchers in academia are making some of the most important advances. Public-private partnerships can generate synergies between these groups. Moreover, people working in for-profit and nonprofit organizations face different incentives and often have different motivations. This tension was at the heart of the recent power struggle at OpenAI and of the New York Times lawsuit against that company.1 A model of AI research pursued by academics in cooperation with the private sector is now at the center of a major project recently announced by New York Governor Kathy Hochul. I have recently been collaborating with two of the academics leading the program: Venu Govindaraju and Jinjun Xiong, who are considered to be among the top AI researchers in the world. My involvement takes the form of the Quent Research Collaborative, an interdisciplinary team of faculty and students at the University at Buffalo (UB) to which I provide financial support. UB is my alma mater and is the hub of the multi-university consortium that Hochul is assembling. It has been identified as the home of a state-of-the-art AI computing center that will facilitate innovation, responsible research, and economic development. Governor Hochul is investing $400 million in the consortium, housed at UB, which has been brought together to envision and lead the future of AI in New York State.2

Venu is the Vice President for Research and Economic Development at UB and the founding director of the Center for Unified Biometrics and Sensors (CUBS) there. Xiong is director of the Institute for Artificial Intelligence and Data Science at UB, and was previously co-founder and co-director of the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), which advanced the research behind the IBM Watson AI system. (Watson is the “brain” that famously defeated the seemingly unbeatable Jeopardy! champion, Ken Jennings.)

AI, friend or foe?

About five years ago I spoke with UB’s president, Satish Tripathi, and asked him whether AI (as it was then understood) is our friend or our foe. (This is a question I have asked all my guests for many years.) He said – and I agree completely – that if it is integrated with human involvement, it is probably our friend, but if humans are not involved, it might be our foe. Because this answer raises more questions than it resolves, I asked Venu and Jinjun for an update and some color on the topic in my Q Factor podcast.

AI, said Venu, is a tool like fire or electricity. Whether it is good or bad depends on the person using it and the application to which it is being put. There has to be a human in the loop, and the human must be morally sound and, as Jinjun said, seek to “use the technology to mitigate the negative impacts instead of amplifying them.” If we follow these guidelines and seek what Venu calls “human-centered AI design,” AI can be our friend. Otherwise, we are vulnerable to the use of AI for ill instead of good.

Regulating AI

While it’s impossible to guarantee that everyone involved in AI development will be ethical, regulation can put up guardrails. These would “clearly define,” in Jinjun’s words, “what is allowable and what is not allowable and in fact punishable, with all the parties involved somehow being held responsible for their actions.” Note that he said “somehow” – the enforcement mechanisms, as well as the regulations themselves, are a work in progress.

Venu drew a parallel to residential building codes. Some very specific principles, such as the number of exits or the maximum steepness of stairways, incline the outcomes in the direction of safety. Some builders will cut corners to save on cost, and some residents will fall down even properly constructed stairs, but the regulators have done what is possible, within reason, to minimize that risk. Building safety, at least in developed countries, is pretty good.

But is “pretty good” good enough for AI, a tool with powers that are both unknown and rapidly expanding?

Wealth and poverty

As I mentioned earlier, Xiong was at the forefront of the research that enabled IBM’s Watson AI to beat the then-reigning Jeopardy! champion. He said the experience had a powerful effect on him because of his prior experience volunteering in India, and I asked him to explain the connection.

In 2010, Xiong said, he volunteered for IBM’s Corporate Service Corps program, analogous to the Peace Corps. IBM sent him to India, the country of origin of many of the brightest and most successful people in his field. He had assumed, then, that India was doing well. But he found that, living right alongside the affluent and educated, many Indians “do not even have access to the basics – health, food, water, education.” He could not reconcile spending billions on cutting-edge technology with the unmet need to provide those basics to some of the world’s poorest.

It's disturbing when you suspect that your contribution to society isn’t helping those who need you the most. He asked himself, “Am I using technology in a meaningful way or not?” Reflecting on that question, Xiong, who knew that AI is a general-purpose technology with applications ranging from agriculture to education to energy to health care, decided to focus on the use of Watson for education. Because education conveys the largest benefit to those who have the least access to it, Xiong was able to reconcile his AI research with his desire to help those in need.

This insight connects directly to investment research and management. Many EdTech businesses use AI, including the well-known Duolingo and Coursera. Moreover, the extensive and rapidly growing use of AI in agritech is already improving the food supply, including in places plagued by food insecurity, and businesses engaged in those activities are interesting investment propositions too. The same goes for AI in health care. For example, a sonogram machine made by Butterfly Network, costing only about $1,000 – one-fiftieth of the conventional price – enables midwives in parts of Africa where there are no doctors to assess risky pregnancies: the midwife sends the sonogram over the internet to an AI that compares it with millions of other sonograms and tells her and her patient much of what they need to know.

How machines learned to recognize faces and read handwriting

Venu also had a personal recollection, in his case about the fruitfulness of public-private partnership in computer vision. Through his employer at the time, he collaborated with Kodak engineers on machine recognition of faces. He got a PhD out of the work, Kodak got some profits (although the company later filed for bankruptcy), and society got face recognition.

Venu also contributed his knowledge of computer vision to the U.S. Postal Service’s conversion from human to machine reading of addresses on envelopes. He recalls that, several years after he developed the computer vision technology, his address-reading system was tested in Tampa. It went from “0% recognition of handwritten addresses to 14% the next morning, with very small error rates that would convert to hundreds of millions of dollars in savings [for the Postal Service in labor costs] in that same holiday season.”

Technological advances with no obvious immediate use, then, often turn out to have great impact in surprising places later on. The use of AI for computer vision is, in my opinion, a big deal, and Venu is rightly proud of it.

Three questions

Over the past five years, I have concluded these discussions with the same three questions (listen here). The first: In what field do you think big data will have the biggest positive impact on the world in the next 10 years? It could be science, manufacturing, investing, medicine, climate change – you name it.

Venu said “health care,” a position with which most of my guests have agreed. Jinjun said “education.” His logic was that health care has too many legal and regulatory obstacles for big data to be used efficiently, while education doesn’t suffer from that handicap.

The second question is: over the next 10 years, where do you see the biggest threats from big data and AI? What are some of the biggest risks? Jinjun suggested that the biggest risk is people’s inability to distinguish true information from false or manipulated information, a concern that has come to the forefront with the upcoming election in the United States. Venu said the biggest risk is the threat to democracy, but then took a different tack: “Computers hallucinate.” That, he said, is a feature, not a bug. “For AI to hallucinate is a very powerful tool because, after all, what is human creativity but hallucinating? You take everything that you have in your brain cells and then extrapolate something and you’ve created something new.”

It’s hard to top that, but the third question goes back to the beginning of our conversation: AI, friend or foe?

Venu took the lead on this question. He believes (as do I) that there are more good people than bad, and that a tool with no inherent moral dimension will therefore be used more for good than for bad. But, he said, “we can't put our head in the sand. It's up to you and me to build the guardrails.”

So we must figure out how to do that, because it isn’t obvious. And, as investors, we must also figure out where the greatest opportunities for investment are, as well as where the pitfalls are that we can’t immediately see.

1 Some background information on the power struggle at OpenAI is at https://www.nytimes.com/2023/12/09/technology/openai-altman-inside-crisis.html. In a related matter, The New York Times sued OpenAI over copyright issues, alleging that the AI software allows users indirect access to material published in the Times through its ChatGPT service; this is discussed at https://www.nytimes.com/2024/02/27/technology/openai-new-york-times-lawsuit.html.

2 See https://www.buffalo.edu/news/releases/2024/01/hochul-empire-ai-ub.html for a description of this initiative.

***Certain of the statements contained on this website may be statements of future expectations and other forward-looking statements that are based on Quent Capital’s current views and assumptions and involve known and unknown risks and uncertainties that could cause actual results, performance or events to differ materially from those expressed or implied in such statements. All content is subject to change without notice. All statements made regarding companies or securities or other financial information on this site are strictly beliefs and points of view held by Quent Capital and are not endorsements by Quent Capital of any company or security or recommendations by Quent Capital to buy, sell or hold any security. The content presented does not constitute investment advice, should not be used as the basis for any investment decision, and does not purport to provide any legal, tax or accounting advice. Please remember that there are inherent risks involved with investing in the markets, and your investments may be worth more or less than your initial investment upon redemption. There is no guarantee that Quent Capital’s objectives will be achieved. Further, there is no assurance that any strategies, methods, sectors, or any investment programs herein were or will prove to be profitable, or that any investment recommendations or decisions we make in the future will be profitable for any investor or client. For full disclosures, please go to our Disclosure page.