With Jeroen Rombouts
On April 10, 2024, Mistral AI launched Mixtral 8x22B, a mixture-of-experts model designed to push the boundaries of AI technologies such as advanced natural language processing. This release marks a significant milestone in the ongoing AI boom, with new challengers emerging alongside established tech leaders like Google and Microsoft. These developments highlight the growing influence of AI across various sectors and its transformative potential. Understanding these tools and their wide range of applications is essential as machine learning becomes crucial to strategic decision-making and value creation. Future managers must grasp their economic, ethical, social and environmental implications.
The ESSEC SPOC 'Introduction to AI for Business' aims to give students a framework for navigating an AI-driven business landscape. The course provides a technical and managerial foundation for the effective deployment of AI technologies. In both 2023 and 2024, ESSEC students took part in a 12-question survey on AI-related themes. The diverse student body offers a rich array of perspectives, captured through both rating-scale questions and open-ended responses.
This article compares the responses of the 2024 cohort with those of 2023 to analyse changing perceptions amidst the AI boom. In the eyes of future managers, is AI a passing trend, or rather a solution to modern challenges? Professor Thomas Huber and Professor Jeroen Rombouts (both in the Information Systems, Decision Sciences & Statistics department at ESSEC) explore the students' opinions on the development of AI. While AI offers solutions to contemporary issues, it also poses inherent risks. Ultimately, this study reveals that AI is still perceived as a polarized sector, with a few major tech players concentrating most of the innovation.
AI as a vector of innovation
Health remains a top priority for ESSEC students, who increasingly view it as the most critical application of AI. In 2023, 40% of students identified healthcare as the most important area for AI, a figure that rose to 55% in 2024. Students not only emphasize the importance of healthcare but also provide specific use cases for improving its quality. For instance, some highlight the potential of image recognition tools to expedite MRI and CT scan processing, while others envision virtual assistants aiding doctors in diagnostics. While some of these solutions already exist, others are expected to emerge in the coming years. The growing awareness among ESSEC students of healthcare issues aligns with ongoing AI advancements in the field. For example, on June 20th, 2024, the French company Doctolib announced several AI innovations in development, including a consultation assistant to reduce the administrative and clinical workload of healthcare professionals and a data coding tool to automatically enrich patient information using medical documents. By 2025, a virtual phone assistant should also allow patients to book appointments by phone just as they would through the app, alleviating the burden on medical secretaries.
The widespread adoption of generative AI in 2023 is clearly reflected in students’ responses in 2024. 40% of students believe that AI tools will broadly automate repetitive tasks, particularly through large language models (LLMs). Additionally, students recognize AI’s potential to tackle the climate crisis by optimizing energy production and consumption, highlighting their awareness of AI’s broader societal impacts.
Students' views on the future of AI continue to show a high level of optimism and positive sentiment. Quantitatively, the average response to the question “On a scale from 1 (low) to 20 (high): how do you see the benefits related to AI?” remains steady at 16/20, the same as in 2023. Sentiment analysis of students' responses to the open-ended question "What do you think is the future of AI?" reveals overwhelmingly positive attitudes. These sentiments remain consistent between 2023 and 2024, indicating that AI is not perceived as a passing trend but as a technology poised to offer solutions to important business and societal challenges.
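In its simplest form, such a sentiment analysis counts positive and negative words per response. The sketch below is purely illustrative: the word lists and sample responses are made up and do not represent the survey's actual method or data.

```python
# Minimal lexicon-based sentiment scoring sketch (illustrative only;
# a real analysis would typically use an off-the-shelf sentiment model).
POSITIVE = {"promising", "transformative", "beneficial", "exciting", "bright"}
NEGATIVE = {"dangerous", "risky", "threatening", "worrying", "biased"}

def sentiment_score(response: str) -> float:
    """Return a score in [-1, 1]: (pos - neg) / number of sentiment words."""
    words = [w.strip(".,!?").lower() for w in response.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

# Hypothetical responses, scored individually and then averaged.
responses = [
    "The future of AI is bright and transformative.",
    "AI is promising but also risky for jobs.",
]
average_sentiment = sum(sentiment_score(r) for r in responses) / len(responses)
```

Averaging such scores over a cohort gives a single sentiment figure that can be compared year over year, which is the kind of comparison the survey analysis relies on.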
Risks associated with AI and the main roadblocks to its development
In 2024, students identified several roadblocks to the development of AI, many of which mirror the concerns raised by the 2023 cohort. Some obstacles stem from a lack of resources and capabilities such as insufficient computational power, data, and data quality. Others are related to the human-machine relationship, reflecting a persistent distrust of new AI technologies. Ethics is frequently mentioned as a major roadblock, with some students expressing concerns that certain AI use cases might be deliberately avoided due to widespread fears of potential negative societal impacts. Additionally, a lack of transparency of new AI tools is noted as a barrier, potentially discouraging users from adopting these technologies. Lastly, some students believe that this distrust will lead to the establishment of a strict legal framework, which could hinder the development of AI.
On the one hand, the EU AI Act, on which the European Parliament adopted its negotiating position on June 14th, 2023, before the final text was formally approved in 2024, could be seen as the kind of strict regulation that some students fear might hinder AI development. By setting stringent guidelines and imposing obligations on AI providers and users, the act introduces a regulatory framework that could potentially slow down innovation. On the other hand, this landmark legislation is designed to address many of the roadblocks identified by students, such as transparency, accountability, and ethical concerns. By creating standards for risk mitigation, the EU AI Act may ultimately pave the way for more responsible and sustainable AI development in Europe.
The dangers associated with AI identified by students in 2024 closely mirror those highlighted by the 2023 cohort, including a lack of human involvement in decision making, privacy concerns, deepfakes & fake news, biases & discrimination, and job replacement. Surprisingly, students' views regarding the most important applications of AI do not seem to influence their perceptions of its potential dangers at a societal level. Indeed, the students' responses are evenly distributed across various categories, as seen in the figure below. This indicates that the dangers identified by the students are not dependent on individual sensitivities regarding the role AI should play in the future, but rather on commonly shared concerns expressed across the entire group.
Figure 4: An illustration of the connection between topics derived from responses to the question “What do you consider the most important application of AI today?” (left) and responses to the question “What is for you the most severe danger associated with AI?” (right). Each data flow represents the number of students who expressed both a specific application of AI and a corresponding concern about its associated dangers.
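The flow widths in a diagram like Figure 4 come from cross-tabulating each student's pair of answers. A minimal sketch, using hypothetical category labels rather than the survey's actual ones:

```python
from collections import Counter

# Hypothetical (application, danger) answer pairs, one per student.
# The labels are illustrative, not taken from the survey data.
pairs = [
    ("Healthcare", "Privacy"),
    ("Healthcare", "Job replacement"),
    ("Automation", "Job replacement"),
    ("Climate", "Bias"),
]

# Each count is the width of one flow between the left and right columns.
flows = Counter(pairs)
```

An even spread of counts across the cells, as described in the text, is what indicates that perceived dangers do not depend on which application a student favours.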
AI is still perceived as a sector polarized among major international players
In 2023, the AI landscape was predominantly shaped by companies from the United States and China. The U.S. solidified its leadership through the efforts of tech giants like Google, Microsoft, Amazon, and OpenAI. At the same time, China ramped up its efforts, with companies such as Baidu, Alibaba, and Tencent heavily investing in AI research, supported by government initiatives like the "Next Generation Artificial Intelligence Development Plan." This rivalry underscores the concentration of AI expertise and resources within these two nations and the growing polarization of the sector among major international players.
This polarization is clearly reflected in the students' responses, both in 2023 and 2024. However, in 2024, China appears to have gained a more prominent position, nearly on par with the United States, whereas in 2023 there was a notable gap between the two AI powerhouses. Some countries, like Japan, have seen a decline in prominence, with Japan being mentioned only half as often in 2024 as in 2023. By contrast, France was mentioned more frequently in 2024, reflecting significant advances in AI by French companies, particularly Mistral AI, which secured a €105 million seed round in June 2023. The emergence of actors outside the US and China also reflects a growing awareness of the need to avoid one-sided reliance on a small number of powerful AI providers. Mistral AI, for instance, claims to develop open-source LLMs for businesses, allowing them to use generative AI while maintaining control over their data and intellectual property.
The rise of AI starting in early 2023 has both disrupted the balance among tech giants and allowed some to reaffirm their leadership in AI. The first major shift was the emergence of OpenAI, which quickly established itself as a leader in generative AI. This shift is clearly reflected among ESSEC students, who rank OpenAI as the second global AI leader in 2024, despite it being a marginal player in 2023 (most responses to the 2023 survey were collected at the end of 2022, just before the widespread adoption of ChatGPT). The second notable shift is the downgrading of Amazon alongside the significant advancement of Microsoft. Microsoft has made major strides in AI, introducing several key customer-facing AI-based improvements to its services over the past few months. In October 2023, Microsoft rolled out enhancements to its Copilot, which build on OpenAI's base technology and integrate AI-powered features into the Microsoft 365 offering. Additionally, Microsoft has been at the forefront of multimodal AI models, which can process and understand multiple types of data, such as text, images, and audio. These models have been integrated into various applications, including the Microsoft Designer app, which uses AI to generate images from textual descriptions. Finally, the students have consistently ranked Google as a leading AI entity in both 2023 and 2024. This ranking is supported by Google's continuous innovations, such as the launch of Gemini in early 2024, and by its substantial investment in AI research and development, highlighted by advancements in healthcare and climate science, which further solidify its leadership.
Conclusion
The 2024 survey reveals a growing recognition among ESSEC students that AI is not merely a passing trend but a critical solution to contemporary challenges. Healthcare stands out as a focal point, with advancements in image recognition and virtual assistants poised to revolutionize medical practices. While optimism about AI's benefits remains strong, it is tempered by anticipated roadblocks: students continue to voice concerns about AI's ethical implications, and issues like bias and lack of transparency underscore the need for responsible AI development and deployment. Moreover, the perception of AI as a domain dominated by a few major international players persists, with notable shifts that reflect evolving industry dynamics. The emergence of new players like Mistral AI may signal a trend towards a more balanced playing field in which companies from various regions compete to best balance the benefits of innovation with wider ethical and societal concerns.