Are the Machines Coming for Us? Stanford 2023 AI Report Reveals All

The sixth annual AI Index Report from Stanford University contains some fascinating insights. It’s a long read, so here are some of the highlights from Contextual Solutions.


Stanford University has been publishing annual reports on the current state of AI for the past five years. This year's report (Artificial Intelligence Index Report 2023) is the sixth. Published in early April, it's the most in-depth and comprehensive installment so far. It's also likely to be the most widely read, given the rapid growth in public awareness of generative AI systems such as ChatGPT, Midjourney, and DALL-E since late 2022.

AI development is moving fast; almost exponentially fast. Attempting to pin down its current state is a little like shooting an arrow at a small target attached to the side of a train that's rapidly moving away from you... whilst blindfolded. But there's a vast amount of original research in the Stanford Artificial Intelligence report, plus input from many renowned individuals and institutions. It's an authoritative read and doesn't seem outdated.

Is AI humanity’s inevitable future or merely a plausible liar?

So, are we galloping closer to Ray Kurzweil's Singularity, the merging of human and machine minds that the famous futurologist has long predicted will occur by 2045? Or is AI still largely hype over substance, with plausible-seeming output that's actually mostly nonsense? Supporting the latter case, the report quotes @GaryMarcus: "AI systems [...] are sociopathic liars with utter indifference to truth, deepfakers with words, every day creating more compelling, more plausible misinformation on demand." Gary isn't alone in this view.

Yet, there have been genuine advances in the past year. The report describes rapid AI improvements in the areas of natural language inference, machine translation, speech recognition, and sentiment analysis. Much of this has been achieved by scaling up the number of model parameters, which reached the hundreds of billions by the end of 2022 and demands a vast amount of computing power.
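To put "hundreds of billions of parameters" into perspective, here is a rough back-of-envelope sketch in Python. The specific figures are illustrative assumptions (roughly GPT-3 scale), not numbers taken from the Stanford report; the point is simply that storing the weights alone requires hundreds of gigabytes of memory, before any training compute is counted.

```python
# Back-of-envelope sketch: memory needed just to store the weights of a
# large language model. Figures are illustrative assumptions, not from the report.
params = 175e9           # assume a 175-billion-parameter model (roughly GPT-3 scale)
bytes_per_param = 2      # assume 16-bit (half-precision) floating-point weights
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:,.0f} GB just to hold the weights")  # ~350 GB
```

Training such a model multiplies that footprint many times over, which is why this generation of systems is built and run almost exclusively by organizations with access to large GPU clusters.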

In fact, performance improvements have been so dramatic that benchmarks have been saturated; new ones are being introduced to handle the power and subtleties of newer AI systems.

Plus points include the use of AI in scientific research, such as fusion reactor plasma control and antibody design. The environmental impacts of massive AI systems in terms of power consumption may be partially offset by their ability to design more efficient GPU chips and manage data-center cooling systems more effectively.

Industry powers AI growth, but legal constraints increase, and human fears remain

The public jury is still out on whether AI is a good thing or a bad thing. Americans are the most pessimistic, with only 35% believing that the benefits outweigh the disadvantages. For China, it's the other way around, with a 78% positive rate. Men tend to be more optimistic about AI than women and less fearful of what it might mean for humanity. Trust in self-driving cars is generally low; fear of loss of jobs, surveillance, loss of privacy, and hacking are all cited as concerns, as is the potential future lack of human connection.

Perceptions aside, AI is making inroads into the economies of many developed nations, with the report showing that industry rather than academia is now the main driver of AI advancement. This is having a knock-on effect on the demand for AI-related professional skills, particularly in America, and is also trickling down into education, although at a slower rate.

There are also chapters on the legal and political aspects of AI globally, and it's here that the report lags a little behind current reality. It does cover the dramatic increase in the number of legal cases centering on disagreements about the use of AI and notes that by the end of 2022, a number of countries had introduced laws specifically governing the use of AI systems.

However, the report doesn't mention the Italian data protection authority's blocking of ChatGPT over privacy issues, a stance which appears justified: some users were apparently being shown other users' private data, and the model's training process may have contravened the GDPR. Nor does the report cover the open letter from the Future of Life Institute, signed by many respected experts, calling for a six-month pause on training the most powerful AI systems.

AI bias is a hard problem to solve

Bias in AI is an interesting topic because bias is both subjective and relative: it reflects the culture in which it is defined. The Stanford report devotes two entire chapters to ethics and diversity, covering bias and fairness in both. As might be expected, the current state of AI is shown to be failing dismally on most counts here, since a given AI's output is a reflection of its training data.

The report cites a study showing that equality is also important to Chinese researchers, yet other governments may not care so much about gender and diversity biases. Such contrasting expectations and cultural perspectives will continue to shape the AI systems developed in different parts of the world. The report - perhaps wisely - has little to say about such disparities, which may end up magnifying cultural differences around the planet. AI might turn out to be an amplifier of conflict, making communication more difficult rather than easier.

From code-writing assistance to fake presidents and more 

There are other interesting snippets of information in the report. For example, GitHub's Copilot AI assistant has been helping developers write code since June 2022, for the most part making coders more productive and less frustrated. Image generation is now so good that few people can distinguish a real human face from an AI-generated one. Multimodal reasoning systems (those that can work across more than one type of input, such as text and images) are making headway, albeit only at an improvement rate of a few percent per year. Google is using its PaLM language model to improve the reasoning of... its PaLM language model. This is an "AI teaching itself" story that has parallels in science fiction, mostly the dystopian kind. On that note, incidents of AI misuse are increasing in number. Notable examples in 2022 include a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and US prisons monitoring the phone calls of inmates.

This article can only scratch the surface of The Artificial Intelligence Index Report 2023. It's worth reading in full, even though it runs to 386 pages (with a lot of graphs and tables). Coincidentally, an Intel CPU with the designation 386 was in widespread use in desktop computers 30 years ago. Today's equivalent PCs are roughly 100,000 times more powerful than a 386. There's no guarantee that the next 30 years will bring the same advances in computing power, but if they do, even the wildest predictions of AI's future abilities may fall hopelessly short of the mark.

FAQ: Stanford AI report 2023

What is an AI index?

An AI index is a structured assessment of the development, progress, and impact of artificial intelligence (AI) technologies and applications. It typically tracks and analyzes indicators such as research publications, funding levels, AI capabilities, job market trends, policy developments, and ethical considerations. AI indexes serve as valuable resources for policymakers, researchers, industry professionals, and the general public who want to understand and monitor the evolving state of AI.

Is Stanford good for artificial intelligence?

Yes, Stanford University is widely recognized as one of the leading institutions for artificial intelligence (AI) research and education. Stanford has a strong reputation for its contributions to AI and machine learning, with renowned faculty members, cutting-edge research initiatives, and influential AI labs such as the Stanford Artificial Intelligence Laboratory (SAIL) and the Stanford Vision and Learning Lab.

Stanford's faculty and researchers have made significant contributions to the field, including advancements in areas like natural language processing, computer vision, robotics, and AI ethics. The university also offers various AI-related programs and courses at both the undergraduate and graduate levels, allowing students to explore AI from different perspectives.

Additionally, Stanford's proximity to Silicon Valley provides students with access to a vibrant tech ecosystem and ample opportunities for collaboration and industry engagement. Overall, Stanford University is considered a prestigious and highly regarded institution for those interested in studying and pursuing a career in artificial intelligence.


Intrigued to learn more? You can access the Stanford Artificial Intelligence Index Report 2023 using this link.
