Ivan Ruzic, Ph.D.

How Generative AI Will Profoundly Reshape Business

Updated: Jan 27


Artificial intelligence (AI) technologies have advanced tremendously over the past decade. A new subclass of AI, called generative AI, has emerged with unprecedented capabilities to synthesize novel content, insights and creations that match or surpass human levels.

This article explores the current state and trajectory of generative AI, providing some guidance for business leaders on harnessing it while addressing potential risks.

What is Generative AI?

Generative AI refers to a broad family of extremely flexible machine learning models that generate new outputs like text, code, images and videos from text prompts, documents, or datasets. This differs drastically from previous more focused (“narrow”) AI systems designed for specialized tasks like facial recognition or translation.


Leading examples include natural language models like ChatGPT, GPT-4, Claude 2 and PaLM 2 for text generation; Stable Diffusion for image creation; and GitHub Copilot and numerous others for computer code generation. What makes these models generative is their ability to produce high-quality, human-like results across an incredible diversity of domains by learning patterns from vast datasets.


Unlike narrower AI, these generative models are general-purpose "Foundation Models" that can be fine-tuned and adapted as needed by providing additional training data. Their broad knowledge comes from digesting massive corpora of text and images, allowing them to complete tasks, generate content and provide recommendations across nearly any domain.


Two Years Old: And Already Many Use Cases

 


The Staggering Pace of Generative AI Advancement

The pace of advancement in generative AI over the past few years has been staggeringly rapid, fueled by simultaneous breakthroughs in computing power, availability of massive datasets, and algorithmic innovations.


Leading models already surpass average untrained humans in capabilities like logical reasoning, content generation and identifying patterns in data. Highly tuned systems like Anthropic’s Claude 2 or OpenAI’s GPT-4 can display levels of truthfulness, judgment and discernment approaching those of skilled professionals when trained on carefully curated data.


Nevertheless, as we’ll see later, risks and challenges remain around potential biases, errors, and unintended consequences. Large models can sometimes provide seemingly smart or creative outputs without explanations for the underlying reasoning, which can be dangerous in scenarios like healthcare treatment recommendations. Testing often reveals unpredictable gaps in reliability, forcing continued monitoring.


Because the AI community is aware of these limitations, methods to audit problematic training data, enhance model safety and reduce harmful biases are progressing rapidly. And while current models have limitations, they already create tremendous value.

 

GPT-4 versus Human Test: Summary (September 2023)[1]



 

The Benefits and Risks of Generative AI

McKinsey Research[2] investigated 63 generative AI use cases across 16 functional areas and 850 occupations. They estimated that widespread adoption of generative AI could add $2.6-$4.4 trillion annually to the global economy through cost reductions and increases in worker productivity.


Seventy-five percent of this value is expected to be realized in just four areas: marketing and sales, software engineering, customer operations and R&D. Clearly, these models provide numerous benefits to businesses and knowledge workers.


Their ability to generate written content, creative designs, computer code, analytical insights and recommendations has the potential to drive major gains in productivity, potentially increasing output by over 40% in those areas.


They can also make knowledge workers' lives easier by acting as responsive assistants that provide helpful information on demand. Marketers can quickly develop optimized content. Designers can iterate on visual concepts rapidly. Scientists can efficiently formulate and test hypotheses. Strategists can model scenarios and analyze competitors.


However, as highlighted by a Goldman Sachs study, deploying generative AI at scale could potentially displace hundreds of millions of jobs worldwide over the next decade by automating around 25% of human work activities. This has major implications for business leaders in terms of planning their future talent strategies, organizational structures, and workforce policies.


Potential Dollar Impact of 63 Generative AI Use Cases across 16 Functional Areas and 850 Occupations[2]



Training and using generative AI models requires large amounts of data and computing power, both of which are increasingly expensive and out-of-reach for many organizations, and even many nation states. So, it’s no surprise that the adoption and deployment of artificial intelligence in enterprises globally and in selected countries is highly uneven. Uneven access to generative AI between companies and nations also raises concerns around concentration of power and inequality.


According to Statista,[3] as of 2022 the leaderboard consists mostly of developed countries, with two exceptions: China and India. China had the highest rate of exploring and deploying artificial intelligence globally, followed closely by India. Both see AI as a national priority and have resourced it heavily. Latin American countries, by contrast, appear on the list only in aggregate, ranking 11th.


As this is 2022 data, the jury is still out on generative AI adoption, but given the barriers to entry it will likely follow a similar pattern.


Data, Compute and Cost to Develop Generative AI Models Can Be Prohibitive[4]



To realize the totality of AI's benefits, it's clear that governments and business leaders must work together to explore better ways of distributing those benefits across a wider range of stakeholders.

Democratization Through Open-Source Generative AI

The open-source AI community acts as an important counterbalance to commercial generative AI providers by offering freely available models and code, such as Meta's LLaMA 2 and Stability AI's Stable Diffusion. These open frameworks lower the barriers for startups and small companies to experiment, although risks of misuse and abuse remain without proper oversight.


One welcome phenomenon to have emerged out of the open-source AI movement is Hugging Face. This company, founded in 2016 and now valued at $4.5 billion, has fast become the go-to place for open-source AI, with some referring to it as the open-source alternative to ChatGPT and the "GitHub of AI." Hugging Face has a library with a mind-boggling 120,000 pre-trained models and 20,000 datasets,[5] and over 10,000 companies use Hugging Face's platform for machine learning and AI development. New groundbreaking applications of generative AI now emerge nearly weekly, if not daily.
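As an illustration of how these open-source models are typically consumed, the sketch below uses Hugging Face's `transformers` library to download a small hosted model (`gpt2`, chosen here purely as a lightweight stand-in for larger generators) and produce a text continuation:

```python
from transformers import pipeline

# Download a small open-source model from the Hugging Face hub and
# generate a deterministic (greedy) continuation for a prompt.
generator = pipeline("text-generation", model="gpt2")
result = generator("Generative AI will reshape business by",
                   max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```

Any of the hub's other text-generation models could be substituted by changing the `model` argument, which is much of the platform's appeal.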


By democratizing access, open-source generative models have the potential to empower creativity and innovation across society. But thoughtfully managing this proliferation also grows in importance to mitigate dangers from irresponsible deployment.

Finding ways to expand access while cultivating responsibility remains an important challenge.


Challenges and Considerations for Responsible Implementation

In addition to its numerous benefits, generative AI also creates significant risks around misinformation, security, personal privacy, and reputation. Fake media generated with low effort can cause tremendous harm by rapidly spreading misinformation. Stolen proprietary data can be abused to train models that steal IP or defame competitors. Poorly designed systems can make unsafe recommendations in sensitive domains like healthcare.


Mitigating these abuses will require diligence, even while finding ways to maximize the benefits.

This is further complicated by technological risk as models become more advanced. Many researchers believe that these models will eventually achieve superintelligence; that is, they will surpass human intelligence in all domains, raising concerns as to whether humans will be able to control such entities.[6]


Responsibly leveraging generative AI requires careful planning, care, and continuous monitoring.


Models More Advanced than GPT-4 May Pose Unique Risks[7]

 



Implications of Generative AI for Business Leaders

Generative AI has disruptive implications that business leaders and their teams need to unravel and address across technology, process, organization, and culture.


At the most basic level, generative AI can automate tasks that are repetitive, routine or codifiable. Content writing, data processing, customer service interactions, network monitoring, fraud detection, warehouse picking and even some accounting tasks are prime examples where substantial value could be created through automation.


However, generative AI’s rapid improvement means leaders must continually reevaluate activities against the latest capabilities. Any rules-based work susceptible to digital encoding is likely to eventually be automated as progress continues, including tasks involving complex reasoning that previously required advanced education and specialized training.


Generative AI also creates opportunities to substantially augment human strengths. Rather than replacing people outright, it can provide helpful inputs to enhance creativity, decision making, design and strategy. Realizing this potential requires an experimental mindset to complement intuition with data and willingness to iterate.


Leaders must also give thoughtful attention to how generative AI could widen inequality if deployed irresponsibly. They need to ensure solutions don’t bake in harmful biases or unnecessarily replace jobs providing livelihoods. In certain cases, ethical considerations may dictate forgoing efficiency for equity.


On the other hand, improving accessibility to generative AI’s benefits across income levels and geographies is also important for broad-based prosperity. Leaders should explore creative approaches for distributing capabilities much more widely.


Lastly, generative AI shifts data value. Companies sitting on underutilized data have new monetization opportunities. Those lacking proprietary data may find themselves at a disadvantage and need to explore partnerships or alternate data strategies.

In summary, while offering enormous opportunities, generative AI requires making deliberate choices balancing benefits and risks across dimensions like innovation, ethics, security, reputation, business models, workforce strategy and culture.


Those able to adapt quickly while anchored to their values will be best positioned to capitalize most fully.

Preparing for the Generative AI Era and Key Strategy Questions

As generative AI continues to evolve, it's essential for leadership teams to prepare their organizations to harness its full potential responsibly.


Initially, it's crucial to identify significant problem areas or opportunities within the organization. This targeted approach ensures that AI solutions focus on areas with the highest potential for value creation and feasibility. Avoid spreading efforts thinly across numerous small pilots and prioritize transformative applications.


Investment in high-quality, representative training data is non-negotiable. Data quality and diversity directly influence the success of AI models. Organizations must be vigilant in avoiding biases in this data, which could result in discriminatory or incorrect outputs. Regular audits for biases and sensitivities should be conducted to ensure the data accurately reflects the business environment.
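One simple form of such an audit, sketched here with hypothetical field names, compares positive-outcome rates across groups in the training data and flags large disparities (the 0.8 threshold echoes the common "four-fifths" rule of thumb):

```python
from collections import defaultdict

def audit_outcome_rates(records, group_key, outcome_key, threshold=0.8):
    """Flag groups whose positive-outcome rate falls below
    `threshold` times the best group's rate (four-fifths rule)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        positives[g] += 1 if r[outcome_key] else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = {g for g, rate in rates.items() if rate < threshold * best}
    return rates, flagged

# Hypothetical training records for a hiring-recommendation model.
data = [
    {"group": "A", "hired": True}, {"group": "A", "hired": True},
    {"group": "A", "hired": False}, {"group": "B", "hired": True},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]
rates, flagged = audit_outcome_rates(data, "group", "hired")
```

A real audit would of course span many attributes and use proper statistical tests, but even a check this simple can surface skew before a model is trained on the data.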


Modernizing the technical infrastructure is vital for the flexibility and efficiency of data management and AI deployment. Aspects like cloud migration, containerization, and scalable compute power become paramount. Moreover, leaders should rethink processes with an AI-first mindset, focusing on augmenting human capabilities with AI, rather than mere task automation. This approach values human creativity, strategy, and design, ensuring that AI complements human roles rather than replacing them.


Risk management is another critical area. Ethical guidelines, rigorous security, and privacy measures should be established and continuously refined. Leaders must be proactive in monitoring and mitigating risks around security, ethics, and bias. Establishing mechanisms for ongoing auditing and human oversight, especially in high-risk scenarios, is essential.


To build trust and ensure the responsible deployment of AI solutions, starting with small-scale pilot tests is advisable. Monitoring their impacts and gathering feedback from those affected will help address any unintended consequences early on.
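A minimal version of that monitoring loop, with invented metric names and thresholds, might track pilot feedback scores and flag when the recent average drifts below a baseline:

```python
from collections import deque

class PilotMonitor:
    """Track user feedback scores (e.g. 1-5 ratings) for a generative AI
    pilot and flag when the recent average drifts below a baseline."""
    def __init__(self, baseline, window=5, tolerance=0.5):
        self.baseline = baseline    # expected average score
        self.tolerance = tolerance  # allowed drop before alerting
        self.scores = deque(maxlen=window)

    def record(self, score):
        self.scores.append(score)

    def needs_review(self):
        # Only judge once the window is full, to avoid noisy early alerts.
        if len(self.scores) < self.scores.maxlen:
            return False
        avg = sum(self.scores) / len(self.scores)
        return avg < self.baseline - self.tolerance

monitor = PilotMonitor(baseline=4.0)
for s in [4, 3, 3, 3, 2]:  # feedback trending downward
    monitor.record(s)
```

Here the recent average (3.0) has fallen more than 0.5 below the 4.0 baseline, so the pilot would be flagged for human review before any wider rollout.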


Finally, education will continue to play a pivotal role. Building internal skills and understanding of generative AI, with a focus on its responsible use, is imperative. Data scientists skilled in machine learning (ML) and MLOps will continue to be invaluable, as will the many new, yet-to-be-defined job roles this technology will create. Moreover, fostering broad understanding, vigilance, and responsibility around AI throughout the organization's culture is vital. Ethical training should not be an afterthought in this educational initiative.


In summary, leaders should continually ask and address several key questions to guide their AI strategy:

  • Which major problem areas or opportunities should our AI solutions focus on?

  • How can we ensure the quality and representativeness of our training data?

  • What technical enhancements are needed for efficient AI deployment?

  • How do we balance human roles with AI capabilities?

  • What measures are we taking to manage risks, especially around security, ethics, and bias?

  • At what point do we trust AI with specific decisions?

  • How are we promoting ethical AI practices within our organization?

Companies that invest in these areas and prioritize responsible AI innovation will find themselves at the forefront as generative AI evolves.

Importance of Scenario Planning for Alternative Futures

Given the uncertainties surrounding generative AI’s trajectory, leaders should consider scenario planning to envision plausible alternative futures and help them evaluate strategic options. It has been used for several decades to identify future risks to organizations and allows leadership teams to design flexible long-term plans.


Scenario planning works best for foreseen risks and stable uncertainties. For example, what would each of the following scenarios mean for how AI could, or should, be implemented by organizations:


Scenario 1 - Responsible AI: Global coordination efforts succeed in proactively regulating AI systems for security, accountability, transparency, and ethics. In this scenario, use cases and capabilities would steadily grow, human agency would be centered, and innovation would remain tied to social benefits.


Scenario 2 - AI Cold War: Divergent approaches between regions restricting AI research and applications create a bifurcated landscape. In this scenario, nations and corporations would race for dominance in key capability milestones, with ethics inconsistently prioritized.


Scenario 3 - Unchecked AI Proliferation: In the absence of unified global protocols, rapid democratization leads to broad accessibility, but with minimal oversight. In this scenario, malicious use cases would grow with powerful actors exerting outsized influence on development directions.


These are just some examples. By gaming out strategies and contingencies under each scenario, leadership teams can identify flexible plans and strategic hedges attuned to their risk appetite and values. This also helps organizations become more adaptable to turbulence in the external landscape.


Scenario planning is a critical tool for navigating deep uncertainty that techniques like predictive modeling cannot adequately address.



Realizing the Promise of Generative AI Responsibly

Generative AI represents one of the most disruptive technologies ever developed, poised to transform nearly every aspect of life and business in the years ahead. Its accelerating pace of progress makes tracking developments and evaluating opportunities essential.


Those able to harness its potential while proactively addressing risks will gain significant strategic advantage. Whether a nation state or company, realizing the full promise of generative AI in a responsible manner will require vigilance, courage, and wisdom in the face of uncertainty.

For all its disruptive potential, generative AI remains a relatively narrow tool lacking common sense, emotional intelligence, and a guiding moral framework. Thus far.


But Artificial General Intelligence may be just around the corner, and with it, superintelligence. Building systems imbued with wisdom, judgment, and humanistic sensibilities will require grappling with some of our most profound and enduring questions.


Realizing the full promise of generative AI - while avoiding its pitfalls - calls for continuous collective reflection on how we shape technological progress for the benefit of all.

Leaders have an essential role to play. Those with clarity of purpose, courage of conviction and depth of compassion will be best positioned to guide their organizations and societies through the transformative landscape ahead. The choices business leaders and their teams make today will shape whether this powerful new capability lifts humanity to new heights or undermines human agency.


By grounding decisions in ethical considerations and human impacts, leaders can help steer this transformative force toward a future that benefits everyone.

 


All illustrations generated by DALL-E 3


Sources:


[2] McKinsey, June 2023, “The economic potential of generative AI”

[3] Statista: Rate of adoption and deployment of artificial intelligence (AI) in enterprise globally and in selected countries in 2022, June 2023.

[4] Stanford University Artificial Intelligence Index Reports (2022 & 2023).

[6] Superintelligent AI May Be Impossible to Control; That's the Good News: https://spectrum.ieee.org/super-artificialintelligence

[7] DeepMind, May 2023, “Model evaluation for extreme risks”



