In the modern digital era, where innovation meets capital, a group of titans has risen, asserting their dominance not just in their respective industries but in the global marketplace. 2023 was no ordinary year for the S&P 500, and amidst its constellation of companies, seven stars shone brighter than the rest. Apple, Microsoft, Amazon, Alphabet (the parent of Google), Nvidia, Tesla, and Meta – these are the Magnificent 7.
This year, they’ve demonstrated not just resilience but extraordinary growth, surpassing expectations and reshaping the future in real time. Journey with us as we delve into the remarkable narratives behind these behemoths, exploring the secrets of their success and the impact of their expansive reach, with a focus on artificial intelligence (AI).
Apple
Apple’s commitment to integrating advanced technologies into its ecosystem is evident not only in its hardware but also in its software endeavors. While the tech giant has embraced open-source technologies like the Darwin kernel for its operating systems and the WebKit browser engine to ensure optimal compatibility with web content, its AI pursuits have historically been more guarded. Conventional machine learning models deployed by Apple handle tasks such as recommendations, photo identification, and voice recognition. However, these had not dramatically influenced Apple’s overarching business strategy until the introduction of Stable Diffusion.
Stable Diffusion’s emergence from the open-source realm presented Apple with a significant opportunity. The model’s uniqueness lies not just in its open-source nature but also in its compact size, which made it compatible first with certain consumer graphics cards and, with further optimization, even Apple’s iPhones. This optimization journey had two significant phases: first, Apple optimized the Stable Diffusion model itself, an effort made possible by its open-source status; then it updated its operating systems so the model could run efficiently on the company’s proprietary chips, leveraging its integrated device architecture.
This strategic move also hints at Apple’s future intentions. Apple’s “Neural Engine” on its chips, designed explicitly for AI functionalities, has been in circulation for several years. Given the advancements with Stable Diffusion, it’s plausible to speculate that forthcoming Apple chips might be even more attuned to this model. Integrating Stable Diffusion into Apple’s operating systems could also be on the horizon, offering app developers easily accessible APIs.
This integration could revolutionize the app landscape. With “good enough” image generation capabilities embedded in Apple’s devices, developers can harness these features without the overhead of extensive backend infrastructure, reminiscent of the viral success of Lensa. Such advancements could tilt the balance in favor of Apple and smaller app creators by equipping them with unique capabilities and distribution platforms.
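For a concrete sense of what local, “good enough” generation looks like from a developer’s chair, here is a minimal sketch, assuming the open-source diffusers library and PyTorch’s Metal (mps) backend on an Apple Silicon Mac; the checkpoint name and prompt are just examples, and Apple’s own optimized route goes through Core ML and the Neural Engine rather than PyTorch.

```python
# Minimal sketch: running Stable Diffusion locally on an Apple Silicon Mac
# using the open-source diffusers library and PyTorch's Metal (mps) backend.
# Apple's optimized path uses Core ML and the Neural Engine instead; this is
# only an illustration of on-device generation.
import torch
from diffusers import StableDiffusionPipeline

device = "mps" if torch.backends.mps.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # example checkpoint; any SD 1.x checkpoint works
).to(device)

image = pipe(
    "a watercolor painting of a lighthouse at dusk",  # example prompt
    num_inference_steps=30,
).images[0]

image.save("lighthouse.png")
```

Swapping the device string is all it takes to run the same lines on other hardware, which is precisely why on-device generation lowers the barrier for smaller app creators.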
However, this localized AI capability might pose challenges to centralized image generation platforms like DALL-E or Midjourney and the cloud-based services supporting them. While these centralized services might offer superior image generation, the convenience and accessibility of built-in capabilities on Apple devices can reshape the market dynamics. The future might see a diminished market for centralized services as localized computing capabilities gain prominence, especially within Apple’s extensive ecosystem.
Microsoft
Microsoft, one of the tech industry’s Goliaths, has displayed impressive agility and foresight in positioning itself for the AI-driven future. The company’s strategy reflects a blend of cloud services, strategic partnerships, and leveraging its existing products to embrace next-gen AI technologies. Let’s take a closer look at each strategic direction:
1/ Cloud & Infrastructure Investment. Similar to AWS, Microsoft offers a cloud service that boasts GPU support, essential for running sophisticated AI models. But perhaps the company’s most strategic move in the AI domain is its exclusive partnership with OpenAI. This alliance isn’t just a significant financial commitment but a long-term investment. Given OpenAI’s trajectory, Microsoft seems poised to lay the groundwork for the AI era’s infrastructure, underpinning the growth of one of the future’s top tech enterprises. An AI chips developed by Microsoft is coming this year.
2/ Bing’s potential: integrating ChatGPT-like capabilities into search could disrupt the space, offering Bing a unique advantage and the possibility of capturing significant market share. Given Bing’s current standing, this risk might very well be a calculated one worth taking. Meanwhile, there are rumors that Microsoft tried to sell Bing to Apple in 2020.
3/ GPT in productivity apps: GPT will be integrated into Microsoft’s suite of productivity applications (see the sketch after this list). Drawing lessons from GitHub Copilot, the AI coding tool built atop GPT, Microsoft is set to deploy an AI-assisted user experience that’s genuinely beneficial. Microsoft’s challenge, and one it seems to be addressing, is ensuring that AI enhances productivity without becoming intrusive – no repeat of the infamous “Cortana” assistant.
4/ Subscription & added functionality: Microsoft’s move to integrate new functionalities into its offerings, potentially at an additional cost, aligns seamlessly with its subscription business model. This strategy might see the tech giant, once perceived as vulnerable to disruption, not only evolving due to these changes but also capitalizing on them to ascend to unprecedented heights.
NVIDIA
NVIDIA has come a long way from its modest beginnings in 1993. Initially recognized for its prowess in creating graphics chips tailored for high-end gaming, NVIDIA’s strategic foresight and innovation placed it at the vanguard of the AI revolution. This transformation wasn’t by mere happenstance; it was the result of a keen awareness of technology trends and a series of strategic decisions.
NVIDIA’s specialization in graphics processing units (GPUs) began with a focus on gaming PCs. Over the years, the company found that the parallel processing capability of GPUs had other applications, most notably accelerating workloads that traditional CPUs handle poorly. This realization attracted giants like Google, Microsoft, and Amazon, propelling NVIDIA’s data center segment revenues from a mere $339 million in 2016 to over $15 billion just six years later. With the rise of generative artificial intelligence and the launch of AI chatbots like ChatGPT, NVIDIA’s chips, known for their excellence in AI model training and inference, became even more coveted. This surge in demand caused NVIDIA’s data-center revenue to skyrocket, with analysts predicting it will exceed $60 billion in the coming fiscal year.
However, NVIDIA’s ascendancy isn’t attributable solely to its hardware; software plays an indispensable role. In 2006, NVIDIA introduced the Compute Unified Device Architecture (CUDA), a programming platform that opened GPUs to general-purpose computing and made problems that were previously deemed financially infeasible practical to solve. With over 250 software libraries catering to AI developers, CUDA has positioned NVIDIA as the preferred platform for AI development. The widespread adoption of CUDA, with over 25 million downloads in just one year, showcases its dominance and gives NVIDIA a competitive advantage that rivals find challenging to replicate.
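To illustrate why that software layer matters day to day, the short sketch below shows how a framework like PyTorch hands the same computation to an NVIDIA GPU whenever CUDA is available; the matrix size is arbitrary and the timing will vary by machine.

```python
# Minimal sketch: the same PyTorch code transparently runs on an NVIDIA GPU
# when CUDA is available, because PyTorch's GPU kernels are built on CUDA
# libraries such as cuBLAS and cuDNN.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A large matrix multiplication, the kind of massively parallel workload GPUs excel at.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.time()
c = a @ b
if device == "cuda":
    torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
print(f"{device}: {time.time() - start:.4f} s for a 4096x4096 matmul")
```

Developers never touch CUDA kernels directly here; the framework’s reliance on NVIDIA’s libraries is exactly the lock-in described above.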
While competitors like Advanced Micro Devices (AMD) are diving into the AI realm with their own offerings, and major clients like Amazon and Google are exploring in-house chip designs, NVIDIA’s challenge lies in sustaining its edge. Its high chip prices motivate clients to consider alternatives, so NVIDIA’s focus should remain on ensuring the unmatched performance of its combined chip and software offering. Drawing lessons from its own past and the wisdom of tech legends like Intel’s Andy Grove, NVIDIA’s continued success might just hinge on the mantra: “Only the paranoid survive.”
Meta
Meta is leveraging AI to refine and personalize the content its users receive, with the goal of delivering increasingly tailored feeds across its platforms.
Sam Lessin has described five stages in the evolution of social media, the last of which is defined by AI-generated content. Reaching that stage demands enormous compute, which explains Meta’s huge investments in data centers and chips.
A key area of focus for Meta is the evolution of its advertising tools. AI has the potential to revolutionize this space, taking charge of tasks like generating content and A/B testing both copy and visuals. Given Meta’s proficiency in scaling such capabilities, it’s poised to lead in this domain.
It’s worth noting that Meta’s advertising approach is primarily geared towards initial consumer engagement. Their aim is to introduce consumers to new products, services, or apps, ensuring they become aware of offerings they weren’t previously familiar with.
This experimental approach is particularly conducive to AI integration. Even though AI-driven content generation carries some cost, it is considerably cheaper than human-driven production.
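As a purely illustrative sketch, not a description of Meta’s actual ad systems, the snippet below shows how AI-generated ad copy variants could be tested with a simple epsilon-greedy rule that routes most impressions to the best-performing copy while still exploring the rest; the variants, the exploration rate, and the click feedback are all assumptions.

```python
# Illustrative sketch (not Meta's actual system): epsilon-greedy testing of
# several AI-generated ad copy variants, exploiting the one with the best
# observed click-through rate while still exploring alternatives.
import random

variants = [
    "Meet the shoes that run with you.",
    "Your next pair of running shoes is here.",
    "Run farther. Recover faster.",
]  # in practice these would come from a generative model

clicks = [0] * len(variants)
impressions = [0] * len(variants)
EPSILON = 0.1  # fraction of traffic reserved for exploration

def choose_variant() -> int:
    if random.random() < EPSILON or sum(impressions) == 0:
        return random.randrange(len(variants))          # explore
    rates = [c / i if i else 0.0 for c, i in zip(clicks, impressions)]
    return max(range(len(variants)), key=lambda i: rates[i])  # exploit

def record(variant_index: int, clicked: bool) -> None:
    impressions[variant_index] += 1
    clicks[variant_index] += int(clicked)

# Example serving loop (clicks would come from real user feedback):
# idx = choose_variant(); show(variants[idx]); record(idx, clicked=user_clicked)
```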
Amazon
Amazon integrates AI throughout its services. While the direct application of AI in consumer-facing tasks like image and text creation may not be immediately apparent, Amazon Web Services (AWS) stands out as a significant player: it offers cloud-based access to high-powered graphics processing units (GPUs), many of which are used for training and running AI models. For instance, whenever a user creates an image with Midjourney or designs an avatar in a dedicated app, the AI inference step is executed on a cloud GPU.
However, AI processes, especially inference, come with associated costs. In simpler terms, employing AI to create or analyze something introduces incremental expenses. It’s anticipated that as technology advances, these costs will decline. AI models are likely to become more resource-efficient, and the chips that power them are expected to improve in both speed and efficiency. Additionally, as cloud services grow and cater to a broader range of products, they’re expected to reap the benefits of economies of scale, maximizing the return on their investments.
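A back-of-envelope sketch makes the “incremental expenses” point concrete; the hourly rate and throughput below are hypothetical round numbers, not AWS pricing.

```python
# Back-of-envelope sketch of the per-image cost of renting a cloud GPU for inference.
# All numbers are illustrative assumptions, not actual AWS pricing.
def cost_per_image(gpu_hourly_rate_usd: float, images_per_hour: float) -> float:
    """Incremental cost of generating one image on a rented GPU."""
    return gpu_hourly_rate_usd / images_per_hour

# Hypothetical figures: a $1.50/hour GPU instance producing 600 images/hour.
print(f"${cost_per_image(1.50, 600):.4f} per image")  # -> $0.0025 per image
```

Both inputs move in the cloud provider’s favor over time: faster chips and leaner models raise throughput, while scale pushes the effective hourly rate down.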
Yet, the extent to which an integrated full-stack approach might impact costs remains a topic of discussion. There’s also the potential of shifting AI inference to local devices, reducing the reliance on cloud services.
Alphabet/Google
Google is making strides in artificial intelligence and large language models (LLMs). Its researchers invented the transformer architecture that underpins today’s LLMs, and a couple of years ago the company introduced its Language Model for Dialogue Applications (LaMDA), a notable advance in conversational AI. More recently, the tech giant revealed Bard, a conversational AI tool powered by LaMDA. The company plans to trial Bard with a select group of testers before releasing it to the broader public. The tool is designed to answer a wide array of queries, from space telescope details to insights about top football players.
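Because the transformer claim is the technical core of the paragraph above, here is a minimal sketch of the scaled dot-product attention mechanism from the original “Attention Is All You Need” paper by Google researchers; the tensor shapes are toy values, and production LLMs add multi-head projections, masking, and far larger dimensions.

```python
# Minimal sketch of scaled dot-product attention, the core operation of the
# transformer architecture introduced by Google researchers in 2017.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """q, k, v: tensors of shape (batch, seq_len, d_model)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # pairwise token similarities
    weights = F.softmax(scores, dim=-1)            # attention distribution per token
    return weights @ v                             # weighted mix of value vectors

q = k = v = torch.randn(1, 8, 64)  # toy batch: 8 tokens, 64-dimensional embeddings
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 64])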
Additionally, Google claims superior image-generation capabilities, surpassing DALL-E and others. Yet these remain assertions, since no tangible products have surfaced.
Furthermore, Google’s legacy in AI-driven Search enhancements is evident through its groundbreaking models like BERT and the more advanced MUM. Sources indicate that Google is on the brink of launching new AI features in Search to simplify and condense complex information, enhancing user experience. In addition to their own products, Google seems keen on fostering a broader AI ecosystem. They’re set to introduce developers to their Generative Language API, suggesting an ambitious plan to cultivate a range of AI-driven tools and applications. This initiative is supported by collaborations with notable tech entities such as Cohere, C3.ai, and the newly announced partnership with Anthropic.
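BERT’s contribution to Search was teaching the system to read queries in context via masked-language modeling; the snippet below illustrates the idea with the open-source Hugging Face checkpoint, which is a research model and not Google’s production Search system.

```python
# Illustrative sketch of masked-language modeling with the open-source BERT
# checkpoint; this is the research model, not Google's production Search stack.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the hidden word from the surrounding context on both sides.
for prediction in unmasker("The Eiffel Tower is located in [MASK]."):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```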
In the realm of search, generative AI could signify a potential disruption rather than a mere enhancement. Initially, disruptive innovations might not match existing standards, leading to them being overlooked by incumbents convinced of their product’s superiority. However, the danger lies in the disruptive tool’s evolution, especially when the existing product becomes cumbersome, paralleling the direction Google Search seems to be heading.
Tesla
Morgan Stanley (MS) anticipates that Tesla’s value could surge by $500 billion thanks to its Dojo supercomputer. MS believes that Dojo could unlock diverse revenue opportunities, especially through the broader implementation of robotaxis and software services; its potential for Tesla has been likened to the transformative impact of Amazon Web Services on Amazon’s profitability. Dojo, which Tesla has been developing for about five years, is primarily aimed at training AI systems for intricate tasks, including improving Tesla’s Autopilot and spearheading its “Full Self-Driving” initiatives. The analysts project that Dojo could tap into markets beyond just car sales. On this news, Tesla’s market capitalization grew by more than BMW’s entire market capitalization. Crazy!
Additionally, Tesla is working on a general-purpose humanoid robot for mundane or unsafe tasks and is hiring specialists for this purpose. They’re also focusing on AI chips for their vehicles, refining neural networks for better vehicular perception and control, and creating tools for effective evaluation of these systems.
***
At the heart of their meteoric rise lies a common thread – the adept harnessing of the AI boom. These companies didn’t just ride the wave; they were the masterful surfers who anticipated its every twist and turn, innovating and integrating AI into the core of their business models. While many companies still struggle with the adoption of artificial intelligence, these incumbents redefined their industries, pushing boundaries and setting gold standards.
In an age where information is the new currency and technology the language, the Magnificent 7 have proven that foresight, when paired with action, can transform the AI dream into tangible, trailblazing realities. As we venture into 2024 and beyond, their legacies serve as a powerful reminder of the limitless potential awaiting those who dare to explore uncharted horizons.