
Innovating for Impact with Generative AI

Generative AI is a foundational technology that is already having an impact in many organizations. In its early days, the technology focused on generating synthetic human likenesses and automating image creation. The advent of ChatGPT, however, demonstrated that LLMs often exceeded the capabilities of the previous natural language processing approaches that were slowly being adopted in enterprises.

This enthusiasm was further buoyed by consumers’ massive uptake of ChatGPT, which has experienced the fastest adoption of any application to date. But the power of these LLMs isn’t limited to fancy chatbots that are better at sounding like real humans. The industry as a whole is starting to acknowledge that LLMs can also serve as universal translators across different domains to generate code, understand complex data sets and simplify user experiences across roles and for customers.

“Generative AI capabilities, such as prompt engineering and adaptive responses, are what set it apart from previous AI models we have seen,” said Bharath Thota, partner in the advanced analytics practice at global strategy and management consulting firm Kearney. “With ChatGPT being introduced to the general public, we have seen a shift in the way that people now understand and interact with AI.”

That said, the new technology comes with various challenges, such as a propensity to hallucinate, which requires guardrails and vetting by domain experts. It also introduces new ethical challenges that call for responsible AI practices. Left unchecked, generative AI could exacerbate existing issues with bias, privacy, security and public opinion.

Some of the ways Generative AI is changing the business landscape are –

A.   Self-Teaching Models

One of the attractions of LLMs is that they can discover patterns in vast unlabeled data sets on their own, at least as a starting point. Even so, it's important to know how to specify the requirements for vetting and refining data, or for fine-tuning models, to get the best results. Rex Chekal, principal product designer at software development consultancy TXI, expects innovations in smaller self-teaching models that compete with large, data-hungry models like GPT-4. One early example is Orca from Microsoft, which imitates the reasoning processes of larger models, using progressive learning and teacher assistance to overcome capacity gaps. "For CIOs, using [LLMs] will be like hiring an all-star employee who continuously improves and is transparent about how they work," Chekal said.
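To make the imitation idea behind models such as Orca concrete, here is a minimal sketch, assuming a hypothetical teacher-model call and a simple instruction/response dataset format: a larger model's step-by-step explanations are collected and used as fine-tuning targets for a smaller student model. The function names are illustrative placeholders, not a real vendor API.

```python
# Minimal sketch of explanation-based imitation (the idea behind models such as Orca):
# a large "teacher" model produces step-by-step explanations, and those traces become
# fine-tuning targets for a smaller "student" model. The teacher call and dataset
# format below are placeholders, not a real vendor API.

def call_teacher_model(prompt: str) -> str:
    """Placeholder for a call to a large model that returns a detailed, step-by-step answer."""
    return f"Step-by-step reasoning and answer for: {prompt}"

def build_imitation_dataset(prompts: list[str]) -> list[dict]:
    """Pair each prompt with the teacher's explanation so the student learns the reasoning, not just the label."""
    return [{"instruction": p, "response": call_teacher_model(p)} for p in prompts]

if __name__ == "__main__":
    dataset = build_imitation_dataset([
        "Summarize this quarterly sales report.",
        "Explain why these invoice totals do not reconcile.",
    ])
    # In practice this dataset would be passed to a fine-tuning job for the smaller model,
    # often in stages of increasing difficulty ("progressive learning").
    for row in dataset:
        print(row["instruction"], "->", row["response"][:60])
```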

B.   Better Insights and Accessibility to Information

Innovations in LLMs make it easier to customize information and experiences for a wide range of employees. "Generative AI has enabled all employees — and not just technology people — to access and benefit from data and technology in new ways," said Benjamin Rehberg, managing director, senior partner, and global lead of tech and digital offerings at Boston Consulting Group (BCG). As a result, using AI tools with little or no code is increasingly becoming the new reality.

C.   General and Specialized Models

The first wave of generative AI models was proficient at many tasks but struggled in certain domains. As Generative AI evolves and expands into specific industries and fields, it is driving the development of models fine-tuned for particular purposes across many industries and domains.

D.   Applications Built on Generative AI

Apps built on top of LLMs, or on LLMs conditioned to solve specific needs, are proliferating; early examples include web navigation concierges and code development assistants. Early growth is starting in B2C contexts, followed by B2B and business-to-enterprise applications.

E.    The Rise of Open Source LLMs

Initial LLMs from OpenAI and others were relatively proprietary. More open models, such as Meta AI's Llama 2, provide viable alternatives that increase transparency, customization and cost-effectiveness. This trend should continue, according to Samuel Hamway, research analyst at Nucleus Research. For CIOs, this means more control over data and AI operations, but it requires greater expertise in model management, maintenance, governance and hardware infrastructure.

F.    LLM Plugins to Extend Organization Capabilities

Generative AI vendors, like OpenAI, now support plugins that extend or augment the core capabilities of LLMs to make them more task-specific. These plugins make LLMs more capable at targeted tasks and useful across different markets. Plugin ecosystems also simplify the integration of AI into existing workflows and streamline the deployment of AI-based offerings.
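As an illustration of the general pattern, here is a minimal sketch, assuming a hypothetical order-status tool and a simplified tool schema: the application registers a task-specific function with a machine-readable description, the LLM chooses it and supplies arguments, and the host executes the call and feeds the result back. This is not a specific vendor's plugin API.

```python
# Minimal sketch of a plugin/tool pattern: the application registers task-specific
# functions with machine-readable descriptions, the LLM picks one and supplies
# arguments, and the host executes it. The schema and dispatcher are illustrative,
# not a specific vendor's plugin API.
import json

def get_order_status(order_id: str) -> dict:
    """Example task-specific capability the plugin exposes (stubbed lookup)."""
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {
    "get_order_status": {
        "description": "Look up the shipping status of a customer order",
        "parameters": {"order_id": "string"},
        "handler": get_order_status,
    }
}

def dispatch(tool_call: dict) -> str:
    """Run the tool the model asked for and return a JSON result to feed back to the model."""
    tool = TOOLS[tool_call["name"]]
    result = tool["handler"](**tool_call["arguments"])
    return json.dumps(result)

if __name__ == "__main__":
    # In a real system the LLM would emit this structure after reading the tool descriptions.
    simulated_model_output = {"name": "get_order_status", "arguments": {"order_id": "A-1042"}}
    print(dispatch(simulated_model_output))
```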

G.   Improved Search

New enterprise search tools use LLMs to enhance access to relevant data. One key innovation is improvements in vector databases, which stage data that has been transformed into an intermediate numeric representation (embeddings) more accessible to LLMs. Hamway said enterprise search initiatives will need to treat data retrieval mechanisms as a core competency to make data more actionable and insights timelier, and to consider a unified data architecture that improves integration with LLM-powered search capabilities.
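The retrieval pattern behind LLM-powered search can be sketched briefly. In this minimal example, the toy bag-of-words "embed" function and the in-memory list stand in for a real embedding model and vector database; documents are converted to vectors up front, and a query is matched by cosine similarity before the best passage is handed to an LLM as context.

```python
# Minimal sketch of vector-based retrieval behind LLM-powered enterprise search:
# documents are transformed into numeric vectors (embeddings), indexed, and matched
# to a query by cosine similarity. The bag-of-words "embed" below is a toy stand-in
# for a real embedding model, and the list is a stand-in for a vector database.
import math
from collections import Counter

documents = [
    "Q3 revenue grew 12 percent in the EMEA region",
    "Employee onboarding checklist and IT access request form",
    "Security incident response runbook for ransomware attacks",
]

# Shared vocabulary so every text maps to the same vector space (a real embedding
# model would capture meaning, not just word overlap).
vocab = sorted({word for doc in documents for word in doc.lower().split()})

def embed(text: str) -> list[float]:
    counts = Counter(text.lower().split())
    return [float(counts[word]) for word in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

index = [(doc, embed(doc)) for doc in documents]   # "staging" step: pre-compute vectors

query = "how should we respond to ransomware attacks"
query_vec = embed(query)
best_doc, _ = max(index, key=lambda item: cosine(query_vec, item[1]))
print(best_doc)  # the retrieved passage that would be handed to the LLM as context
```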

Specific ways Generative AI is impacting the enterprise include –

1.    Upskilling. The rush to take advantage of Generative AI is increasing the pace of upskilling efforts. “AI upskilling paves the way for building an enterprise that fully understands how to use generative AI and the benefits the technology brings to the company,” said Scott Likens, global AI and innovation technology leader at global consultancy PwC. Because of this, he expects to see far more companies investing in training and upskilling for current employees in the future, especially in Generative AI usage.

2.    Hybrid architectures. Likens also observed that Generative AI is evolving rapidly. The increased pace of innovation in open source software and in platforms from big tech companies means enterprises need to rethink how they test and scale these technologies. This is also increasing the use of hybrid architectures, which are changing faster than other digital transformations. Because these new approaches shorten the time to value, it's important to design flexibility into new offerings.

3.    Changing the nature of expertise. Generative AI is automating an increasing number of tasks to boost productivity. Everest Group's Vijayan expects this to also change the nature of expertise. Enterprises will need to rethink their talent agenda, workforce planning, and learning and development programs, recognizing that valuable human expertise is shifting from mastering things such as programming language syntax to higher-level understanding, better insights and improved foresight. Because of this, enterprises need to rebuild their skills and competencies and reassess how to attract and retain talent.

4.    Renewed focus on enterprise data strategy. The biggest gains of Generative AI require going beyond the low-hanging fruit of generating text, images, and other media. Vijayan envisions a future where Generative AI creates appropriate business or IT workflows, creates complex documents from scratch and generates specific or personalized marketing material. These new use cases will require seamless access to enterprise data, regardless of the approach. “The enterprise data journey is not new, but GenAI will require a renewed focus and perhaps more investments to get there quicker,” he said.

5.    Automation of job tasks reshaping jobs and the meaning of work. Prior waves of automation built on robotic process automation and business process automation focused on automating whole processes. Generative AI can accelerate automation, and certain tasks will become fully automated, BCG's Rehberg said, while more complex work will still require a human element. "We recognize that all revolutionary developments have not led to less jobs, just changes in the effectiveness and efficiencies of employees. It would be a mistake to believe that there will be less jobs available moving forward, rather some jobs will be reshaped and evolve."

6.    Generative AI embedded natively into existing enterprise apps. Generative AI will evolve from standalone assistants, like ChatGPT, to something natively woven into productivity applications, such as email, spreadsheets, content authoring tools and presentation tools, and into core enterprise systems, such as ERP, CRM, HR management and recruitment systems. Microsoft's Copilot provides an early example of this across different applications.

7.    Competition, disruption and lowered barriers to entry. Generative AI is enabling more automation, so organizations can do more with fewer resources, produce results faster and reduce costs. "As costs come down, new business models become more feasible," Vijayan said. This is similar to how digital banks quickly started taking business from brick-and-mortar financial institutions. By extension, disruption is expected in many industries and domains.

8.    The niche future of Generative AI. Enterprises are expected to see the best results in customizing LLMs for their own particular industry and use case. There won’t be one general tool, such as a chatbot, used across industries, predicted Mona Chadha, director of category management at AWS. Rather, each organization will develop a Generative AI offering that can deliver business outcomes for specific domains or markets. “These tools will include highly domain-specific LLMs for consumers to use with their own contextual data,” she said.

9.    Cost. While current Generative AI services are more expensive than traditional search and natural language processing (NLP) techniques, prices are dropping and are expected to go down further as infrastructure, hosting, training, and inference become more efficient and economies of scale are realized.

Here are some of the ways generative AI will shape various use cases within enterprises –

A.   Research

Generative AI will have the greatest impact on jobs that focus on research, particularly those involving the largest sets of data, said Brian Spanswick, chief information security officer and head of IT at data security company Cohesity. This includes research relating to legal questions, scientific research, data governance and code development, and it will increase the emphasis on higher-level critical thinking in day-to-day work. LLMs will do the heavy research lifting more completely across larger data sets, virtually instantly. "Rather than spending the majority of people's time on busy work, the power of the employee will be in making good decisions based on the data they have, with the knowledge that that data is trustworthy," he said.

B.   Cybersecurity

Generative AI will also streamline many aspects of cybersecurity. Spanswick said one of the biggest challenges in cybersecurity has been understanding where critical data sits, how that data moves through the company and its ecosystem, and how it enables core business processes. Generative AI will help security teams obtain, analyze, synthesize and act on this data. This makes it easier for them to understand their attack surface and the level of protection required, and to assess the security posture of those attack surfaces. In turn, organizations are better able to prioritize security investments, assess practical cyber-risk and establish the best KPIs for security controls and compliance requirements.

C.   Business intelligence

Generative AI is improving the value and effectiveness of business intelligence (BI). Analytics vendors are exploring how they can take advantage of AI capabilities in their platforms, either by integrating with existing Generative AI services or by building their own. Generative AI will also streamline traditional BI workflows that now require close collaboration among developers, data scientists and business analysts. Further, LLMs will enable people with minimal or no technical skills to produce the same content. Because of this, content on dashboards and in reports can be personalized for different people or groups.

D.   Advancement of AIOps

Enterprises are increasingly turning to AI to improve IT operations management, or AIOps. This is sometimes confused with MLOps, which focuses on enhancing machine learning development workflows. Generative AI has the ability to sift through vast quantities of data to take programmatic actions. An example of this is ingesting various forms of business data to identify and mitigate anomalous behavior faster and more precisely.
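A minimal sketch of that pattern follows, assuming an illustrative z-score threshold and prompt format: flag anomalous metric samples programmatically, then package the findings as a prompt an LLM could turn into a triage summary or remediation suggestion.

```python
# Minimal sketch of the AIOps pattern described above: scan operational metrics for
# anomalies, then package the findings as a prompt an LLM could turn into a summary
# or remediation suggestion. The threshold and prompt format are illustrative choices.
import statistics

def find_anomalies(samples: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of samples that deviate strongly from the mean (simple z-score test)."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples) or 1.0
    return [i for i, value in enumerate(samples) if abs(value - mean) / stdev > z_threshold]

if __name__ == "__main__":
    latency_ms = [102, 98, 101, 97, 105, 99, 480, 103]   # one obvious spike
    spikes = find_anomalies(latency_ms)
    prompt = (
        "The following latency samples were flagged as anomalous: "
        f"{[latency_ms[i] for i in spikes]}. Suggest likely causes and next steps."
    )
    print(prompt)  # in a real pipeline this prompt would be sent to an LLM for triage
```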

E.    Personalized Customer Experiences

Generative AI is also playing a role in increasing customer engagement through tailored or personalized interactions, website experiences, products and services. Kearney's Thota said generative AI could help businesses craft highly personalized content to boost sales through cross-selling and upselling opportunities, as well as reduce churn. He recommends working closely with marketing and customer relationship teams to gain insight into how Generative AI can be integrated with existing customer engagement platforms and strategies. This helps guide the process of collecting and integrating the data needed to build the personalization engine, and provides the structure to scale the tool as it expands across the organization.

F.    Advanced Conversational AI

Generative AI techniques are leading to more sophisticated NLP models that better understand context and generate humanlike text. Thota believes this could transform various aspects of business operations, including customer support, multilingual support, conversational knowledge databases and virtual assistants across multiple functions. He recommends companies prepare by identifying areas where advanced conversational AI can contribute to customer- and employee-facing interactions, and by thinking early about governance and user guidelines to prevent misuse of conversational AI models.

G.   Automated Content Creation

Marketing, communication and design teams are using AI-powered tools to streamline content creation processes. This accelerates campaign timelines, optimizes creative resource allocation and bolsters brand consistency, said Dr. Stefan Sigg, chief product officer at Software AG. With this in mind, explore ways to integrate AI-powered tools into workflows to improve collaboration between AI and people. It's also important to upskill creative teams to collaborate effectively with AI systems, scale AI infrastructure for increased content demands, and foster an organizational shift that embraces AI as a creative ally rather than a replacement for people or functions. "An example of this is enterprises having their own EnterpriseGPT for customized use within the organization," he said.

H.   Increased Efficiency of Operations

The wide adoption of Generative AI is expected to improve the efficiency of operations in many organizations and industries, said Shipra Sharma, head of AI and analytics at AI consultancy Bristlecone. Generative AI can work alongside humans to make their jobs easier, save time and reduce costs. "That's a tangible and important benefit," Sharma said, as is making operations nimbler and better positioning the organization to adapt to change and opportunity.

As organizations adopt Generative AI to drive change and make good on opportunities, there are challenges that require consideration of the following –

  • Risk. Enterprises need to address primary risks around generative AI for broad-based adoption, Everest Group’s Vijayan said. Top issues include regulatory, intellectual property, data, privacy and explainability concerns. He expects enterprises and vendors to develop meaningful workarounds and mitigation mechanisms.
  • Responsible Use. PwC's Likens said enterprises must consider security and performance risks with generative AI, such as bias and hallucinations. This makes it imperative to plan and execute responsible AI rollouts. There is also a need to invest in internal AI tools to ensure proprietary data and ideas remain within the enterprise. Adopting responsible AI frameworks and guardrails is also needed to encourage responsible and ethical use.
  • Mitigating Surprises with Shadow AI. Cloud and other innovations created a variety of low-cost, easy-to-access services that drove the growth of shadow IT (technology used outside corporate IT's purview). Abhishek Gupta, founder and principal researcher at Montreal AI Ethics Institute, expects a similar rise in shadow AI: AI systems, solutions and services used or developed within an organization without explicit organizational approval or oversight. It can include anything from using unsanctioned software and apps to developing AI-based solutions in a skunkworks-like fashion. To mitigate potential issues with shadow AI, enterprises should raise AI literacy and awareness of risks (e.g., hallucinations, bias, and leakage of private and confidential information).
  • Regulatory and Copyright Concerns. Regulators are in the early stages of determining the impact of AI on existing copyright and intellectual property frameworks. "Content created by a generative AI model resembles human-created work and thus could lead to regulatory and copyright issues," Thota said. Because of this, enterprises need plans for the ethical use of content and must ensure models avoid plagiarism and other potential issues. They need to be proactive and collaborate with legal and other teams to develop internal policies and guidelines for using generative AI applications within the organization. It's also important to track emerging trends in copyright laws and regulations within your industry.
  • Trustworthy AI. Generative AI technology providers and governing bodies are working on making Gen AI more trustworthy. This includes more security technology, data protection guarantees and legal protections. Data labeling and tagging, along with digital watermarks, also enhance trust, improve transparency into how data is used and increase clarity on copyright.
  • Rise of Confidential Computing. Training LLMs on vast centralized data sets won't work for many enterprise data sets involving personally identifiable information or confidential data. Sigg expects security and privacy concerns to prompt the evolution of generative AI toward techniques like federated learning and secure multiparty computation that protect sensitive data during the training process (a minimal sketch of the federated-averaging idea appears after this list). "These advancements enable businesses to collaboratively train models on decentralized data, fostering industry partnerships, while maintaining data security and complying with regulations such as GDPR," he said. This balance between data sharing and privacy protection opens doors to innovative collaborations previously hindered by privacy concerns.
  • Interpretive AI. Early Generative AI models made it difficult to discern how the technology was being used across the organization at scale. AWS' Chadha indicated that innovations in interpretive AI will help CIOs understand the value of their customer base and enable organizations to assess product impact, market adoption and internal utilization.
  • Energy Use. Generative AI can dramatically affect the consumption of IT resources and, hence, the energy usage of an organization. Chadha said organizations should prioritize the use cases most valuable to their customers and strip down inference code to optimize cost and time. It can also help to make the GPUs at the heart of Generative AI training and inference more efficient, and to develop a strategy that accounts for additional GPU capacity coming online. Running workloads on premises where it makes sense and monitoring the cloud provider's effect on sustainability can also help.
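Below is a minimal sketch of the federated-averaging idea referenced in the confidential computing item above, using a toy linear model: each party computes an update on its own local data and shares only model weights, never the raw records. A real deployment would add secure aggregation, differential privacy and other protections.

```python
# Minimal sketch of federated averaging: each party fits an update on its own local
# data and shares only model weights, never the raw (potentially confidential) records.
# A single gradient step per round on a linear model keeps the example small; real
# deployments add secure aggregation, differential privacy, etc.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on least-squares loss, using only this party's data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two organizations, each with private data that never leaves their side.
parties = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    parties.append((X, y))

global_w = np.zeros(2)
for _ in range(50):
    # Each party computes an update locally; only the updated weights are shared.
    local_weights = [local_update(global_w, X, y) for X, y in parties]
    global_w = np.mean(local_weights, axis=0)   # the server averages the weight vectors

print("learned:", global_w, "target:", true_w)
```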

In conclusion, Generative AI is an enabler to innovate for impact and an important strategy to meaningfully improve outcomes. Please contact the author or CAIL to learn more.

Dec 14, 2023     by    George Lawton / CAIL Innovation commentary
info@cail.com            www.cail.com              905-940-9000