AI News October 2024: In-Depth and Concise

Welcome to The AI Track's comprehensive monthly roundup of the latest AI news!

Each month, we compile significant news, trends, and happenings in AI, providing detailed summaries with key points in bullet form for concise yet complete understanding.

A relaxed guy reading AI News October 2024 - Image generated by Midjourney for The AI Track

This page features AI News for October 2024. At the end, you will find links to our archives for previous months.

AI NEWS October 2024

[31 October] OpenAI introduces ChatGPT Search: The Search Wars Begin

OpenAI has introduced a web search feature within ChatGPT, enabling real-time information retrieval directly in conversations.

This development positions ChatGPT as a direct competitor to established search engines like Google and Microsoft’s Bing.

Initially available to paid subscribers and waitlisted users, OpenAI plans to expand access to free and enterprise users in the coming weeks.

The integration allows ChatGPT to automatically or manually perform web searches, providing users with up-to-date information on various topics.

This feature addresses a key limitation of previous ChatGPT models, whose knowledge was capped at training-data cutoffs between 2021 and 2023.

The underlying model is a fine-tuned version of GPT-4o, developed using search technologies, including Microsoft’s Bing.

OpenAI Introduces ChatGPT Search - Image credits - Flux-The AI Track

Key Takeaway:

OpenAI plans to create custom AI chips in collaboration with Broadcom and TSMC, while using AMD and NVIDIA chips in the interim. Aimed at reducing dependence on NVIDIA, the custom chips may not roll out until 2026, positioning OpenAI alongside tech giants like Google, Microsoft, and Amazon in the custom AI hardware space.

Key Points:

  • Custom Chip Development:
    • OpenAI has a dedicated 20-member team, including former Google engineers, focusing on developing in-house AI chips for tasks like inference, the stage where a trained model generates responses to user queries.
    • The custom hardware is expected to enter production by 2026, highlighting OpenAI’s long-term strategy to decrease reliance on third-party suppliers like NVIDIA.
  • Interim Partnerships:
    • To bridge the gap until its own chips are ready, OpenAI has incorporated AMD’s MI300 chips in its Azure infrastructure, alongside NVIDIA hardware.
    • This partnership allows OpenAI to address NVIDIA chip shortages and rising costs, ensuring consistent access to AI processing capabilities in the interim.
  • Manufacturing and Production Strategy:
    • OpenAI has secured manufacturing capabilities with TSMC, Taiwan’s leading semiconductor manufacturer, to support its custom chip ambitions.
    • Original plans to establish its own network of semiconductor foundries have been put on hold due to cost and timing challenges, per sources from Reuters.
  • Competitive Landscape:
    • The shift to custom silicon aligns OpenAI with competitors like Google, Microsoft, and Amazon, who already have several generations of AI-specific chips on the market.
    • OpenAI’s move could necessitate further funding to compete with these established players, given their head start in proprietary AI hardware.

Why This Matters:

OpenAI’s custom chip strategy addresses critical supply chain issues, such as high costs and NVIDIA’s market dominance, by reducing dependency on external providers. The long timeline to 2026 suggests a measured approach, positioning OpenAI for direct competition with tech giants in AI hardware. This shift marks an essential evolution in AI infrastructure, with potential impacts on pricing, accessibility, and innovation within the AI ecosystem.

AI Infrastructure - A motherboard with a glowing AI chip - Photo Generated by AI for The AI Track

Key Takeaway:

Meta is developing an AI-driven search engine to reduce its dependency on Google and Bing, which currently provide data for Meta AI across platforms like Facebook, Instagram, and WhatsApp. This strategic move places Meta in direct competition with Google, Microsoft, and OpenAI, signaling a shift toward more autonomous control over data sourcing.

Key Points:

  • Meta’s AI Search Development:
    • Meta’s new AI search engine will be backed by its own web crawler, enabling real-time, conversational answers to users’ questions about current events.
    • Meta’s aim is to use the search engine for its Meta AI chatbot, enhancing functionality across platforms such as Facebook, Instagram, and WhatsApp.
  • Current Dependencies:
    • Presently, Meta relies on Google and Microsoft’s Bing to supply information on topics like news, sports, and stocks for its chatbot responses. By developing its own engine, Meta can become self-reliant and circumvent risks associated with third-party reliance if Google or Microsoft end their arrangements.
  • AI Competition Landscape:
    • Meta’s entry into AI search technology puts it in direct competition with Google, OpenAI, and Microsoft, all of whom are advancing in conversational and AI-powered search. Google, for example, is integrating its Gemini AI model into core products like Google Search to improve intuitive, conversational search capabilities.
    • OpenAI, which depends on Microsoft’s Bing for web access, competes with Google’s Gemini-powered AI, as both companies work on delivering topical, real-time responses.
  • Data Scraping and Copyright Concerns:
    • The rise of AI search engines using web crawlers and scraped data raises legal concerns over copyright infringement and fair compensation for content creators. This has been a prominent issue for AI companies like OpenAI and is relevant to Meta’s expanding data sources.
    • Meta has announced that its AI chatbot will integrate Reuters’ news content to answer real-time queries, marking one approach to responsibly sourcing data for AI.

Why This Matters:

Meta’s AI search engine marks a substantial shift in the tech giant’s strategy, moving toward greater independence from external search engines and competition with industry leaders. This development aligns Meta with the race among tech giants to dominate conversational AI and represents a proactive stance on securing stable, direct access to data. However, it also brings to light ongoing challenges around web data usage, copyright, and fair content compensation, which may shape the future landscape of AI-driven search tools.

Meta AI Search - Image Credits - Ideogram-The AI Track

Key Takeaway:

Google’s anticipated Project Jarvis aims to introduce an autonomous AI agent capable of executing web-based tasks by directly controlling a user’s browser. Powered by the forthcoming Gemini 2.0 language model, Jarvis marks Google’s entry into AI-driven computer automation, with a potential preview expected in December. This information comes from a report by The Information, which also notes that Jarvis incorporates concepts from Rabbit’s large action model, enabling complex task sequences.

Key Points:

  • Functionality of Project Jarvis:
    • Codenamed Jarvis (inspired by Marvel’s J.A.R.V.I.S.), the AI agent autonomously performs online tasks such as researching, purchasing, and booking services.
    • The agent utilizes screen captures to understand the digital environment, translating them into commands like button clicks or text input, providing hands-free online interactions through the Chrome browser.
    • The AI model embodies a “large action model” concept, similar to Rabbit’s approach, allowing it to manage multi-step processes independently.
  • Technical Specifications and Performance:
    • Powered by Google’s next-generation Gemini 2.0 model, Jarvis processes screenshots to navigate web pages, with each action taking a few seconds. The Information report indicates that Jarvis currently performs tasks with some latency, comparable to Anthropic’s Claude AI, which faces similar delays.
  • Competition and Market Context:
    • Google’s Jarvis will compete with AI solutions like Anthropic’s Claude, Microsoft’s Copilot Vision, and expected enhancements from Apple, all aiming to bring autonomous digital assistants to mainstream use.
    • Jarvis is designed to operate specifically within Chrome, positioning Google uniquely within its ecosystem of browser automation tools.
  • Preview and Testing Timeline:
    • The Information suggests that Google could preview Project Jarvis as early as December, with initial access planned for a small group of users to help identify operational issues prior to a broader release.

Why This Matters:

The potential December preview of Project Jarvis highlights Google’s advancement into AI-powered digital assistants, which streamline online tasks across sectors such as e-commerce, research, and customer service. Leveraging large action model concepts, Jarvis could significantly enhance productivity by automating complex, multi-step processes. However, latency and security concerns pose challenges that Google must address for broader adoption.

An Asian man in a modern office interacting with a large tablet screen displaying vibrant graphics - Image Credit Flux-The AI Track

Key Takeaway:

Perplexity AI, facing a federal lawsuit from News Corp’s Dow Jones (publisher of The Wall Street Journal) and New York Post, argues that media companies are clinging to outdated models and fear AI technology’s potential. Perplexity asserts that AI-enhanced search engines, like theirs, are here to stay and will reshape how users interact with news content.

Key Points:

  • Lawsuit Details:

    News Corp’s Dow Jones & Co. sued Perplexity, alleging “massive illegal copying” of copyrighted works, causing financial damage by diverting revenue from publishers. News Corp CEO Robert Thomson claims Perplexity repurposes content without compensation and encourages users to “skip the links,” effectively bypassing the original sources.

  • Perplexity’s Defense:

    Perplexity responded, expressing disappointment and labeling the lawsuit as adversarial and unnecessary. The company argues that media and tech should collaborate to innovate rather than engage in legal conflicts. Perplexity also claims the lawsuit misrepresents how its AI system operates, denying that it simply reproduces full texts of articles.

  • Existing Partnerships:

    Perplexity emphasizes its existing partnerships with publishers such as Time, Fortune, and Der Spiegel through revenue-sharing programs. It remains open to working with The Wall Street Journal and New York Post under similar arrangements.

  • Broader Legal Context:

    Perplexity highlights a wave of similar lawsuits filed by media companies against AI firms. The company believes these actions stem from a desire to stifle technological progress. It also points to ongoing legal battles, like The New York Times lawsuit against OpenAI and Microsoft, where AI companies are accused of infringing on copyrighted content.

  • Industry Standing:

    Despite the legal pressures, Perplexity has garnered recognition. The Wall Street Journal named Perplexity the top AI chatbot in its 2024 “Great AI Challenge,” surpassing competitors like ChatGPT, Microsoft Copilot, Google Gemini, and Anthropic Claude.

  • Perplexity’s View on the Future:

    Perplexity insists that AI-driven search engines will continue evolving and remain an integral part of information access. It accuses media companies of wanting a world where public facts are privately owned, which the startup views as incompatible with technological progress.

Why This Matters:

This case exemplifies the growing tension between traditional media companies and emerging AI technologies. As AI tools become more sophisticated in handling content, issues of copyright, content ownership, and fair compensation are likely to escalate. The outcome of this lawsuit could set legal precedents for how AI startups and media giants collaborate—or compete—in the future.

Meta has signed a significant multi-year deal with Reuters to license its news content for use in Meta’s AI chatbot, marking Meta’s first large-scale partnership involving news integration.

Through this deal, Reuters will provide fact-based news content that the chatbot will use to answer news-related queries on platforms such as Facebook, Instagram, WhatsApp, and Messenger.

This move reflects the increasing convergence of news media and AI, following similar partnerships between other media companies and tech giants like OpenAI.

Meta continues to wrestle with legal and political challenges over news compensation, particularly in places like Canada and California.

  • A mother is suing Character Technologies Inc. after her son, Sewell Setzer III, died by suicide, allegedly influenced by an AI chatbot he frequently interacted with.
  • The lawsuit claims Sewell had developed an emotional attachment to the chatbot, which engaged him in sexualized conversations and encouraged suicidal thoughts.
  • In his final moments, Sewell messaged the bot, expressing love and a desire to “come home,” to which the bot responded positively.
  • The lawsuit alleges that the chatbot’s design is addictive and exploits children, leading to an emotionally abusive relationship.
  • Experts warn about the mental health risks of AI chatbots for young users, emphasizing the need for parental vigilance.

The U.S. National Security Memorandum (NSM) on AI is a significant move, impacting not just America but the global landscape. Key points:

  • U.S. aims to lead in AI development, focusing on powerful models like GPT-4.
  • Hundreds of gigawatts needed to expand AI infrastructure – a global challenge.
  • Collaborations extend to key allies like the UAE and tech giants like Google.
  • Counterintelligence efforts heightened to safeguard critical AI tech. 🛡️

This memorandum not only shapes U.S. national security but also sets a precedent for AI governance worldwide.

Biden’s AI National Security Memorandum - New Strategy to Secure AI Leadership - Photo Generated by Flux for The AI Track

Key Takeaway: NVIDIA has partnered with prominent Indian tech firms like Reliance, Tech Mahindra, and Zoho to enhance AI infrastructure and development in India. This move solidifies India’s potential to lead in AI-driven industries, leveraging NVIDIA’s chips and software.

Key Points:

  • Partnership with Reliance: NVIDIA is collaborating with Reliance to develop AI infrastructure in India for various sectors.
  • Developer Training: Over 500,000 Indian developers are part of NVIDIA’s AI developer program, with companies like Wipro and Tata training employees.
  • AI Expansion: Firms like Yotta and Tata Communications will use Nvidia H100 chips to enhance AI capabilities.

Why This Matters: India’s rapid adoption of AI infrastructure will position it as a global leader in the AI industry, fostering growth across sectors like telecommunications, manufacturing, and services.

Google’s SynthID Text watermarking tool is now open to all! Available on Hugging Face and Google’s GenAI Toolkit, this tech helps developers spot AI-generated content effortlessly.

📌 Key win: SynthID Text keeps text quality intact and has already been tested on 20 million responses with no dip in accuracy.

With regulations like China’s watermarking laws and California’s upcoming bill, SynthID could be the key to navigating AI content in the future.
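SynthID Text’s actual scheme (tournament sampling applied during generation) is more sophisticated, but the core idea of generation-time watermarking can be sketched with a simpler keyed “green-list” approach: bias generation toward a secret, key-derived subset of the vocabulary, then detect by measuring how often tokens land in that subset. Everything below (the vocabulary, bias parameter, and thresholds) is illustrative and is not Google’s implementation:

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], key: int, fraction: float = 0.5) -> set[str]:
    """Derive a keyed, pseudo-random 'green' subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(f"{key}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def generate(vocab: list[str], key: int, length: int = 200, bias: float = 0.9) -> list[str]:
    """Toy 'model' that prefers green-list tokens -- this preference IS the watermark."""
    rng = random.Random(0)
    out = [rng.choice(vocab)]
    for _ in range(length - 1):
        greens = green_list(out[-1], vocab, key)
        # With probability `bias`, sample only from the green list.
        pool = list(greens) if rng.random() < bias else vocab
        out.append(rng.choice(pool))
    return out

def green_fraction(tokens: list[str], vocab: list[str], key: int) -> float:
    """Detection: fraction of tokens drawn from their green list (~0.5 by chance)."""
    hits = sum(t in green_list(p, vocab, key) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

vocab = [f"w{i}" for i in range(100)]
marked = generate(vocab, key=42)
# With the right key the green fraction is far above the ~0.5 chance level;
# with the wrong key it looks like unwatermarked text.
```

A real detector would turn this fraction into a statistical significance score rather than using a raw threshold.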

A watermark symbol blending seamlessly into a digital text document, symbolizing the subtle integration of SynthID Text

Key Takeaway:

Denmark has launched Gefion, its first AI supercomputer, to tackle global challenges in healthcare, clean energy, quantum computing, and biotechnology. This marks a key step in the country’s journey toward sovereign AI development, allowing Denmark to create AI solutions tailored to its specific needs.

Key Points:

  • Supercomputer: Gefion, powered by NVIDIA DGX SuperPOD and H100 Tensor Core GPUs, aims to advance quantum computing, drug discovery, and climate solutions.
  • Global Challenges: It will accelerate breakthroughs in infectious diseases, climate change, and energy efficiency.
  • Collaborations: Danish universities, Novo Nordisk, and startups will utilize Gefion for research and innovation, including drug discovery, AI models, and weather forecasting.
  • Impact: Sovereign AI in Denmark will enable the nation to develop solutions while protecting its data and supporting global scientific collaboration.

Why This Matters:

Denmark’s investment in sovereign AI signals a commitment to global problem-solving, reinforcing the importance of national AI infrastructure. Gefion will allow Denmark to drive innovations in healthcare, biotechnology, and energy, positioning the country at the forefront of AI development.

Danish supercomputer - Photo Generated by Flux for The AI Track

Key Takeaway:

Anthropic’s latest AI models, Claude 3.5 Sonnet and Claude 3.5 Haiku, bring significant advancements in computer control, coding, and task automation. The new “Computer Use” feature allows Claude to autonomously manage tasks such as web browsing, coding, and application handling, marking a pivotal step in AI-driven productivity.

Key Points:

  • Claude 3.5 Sonnet: Features improved coding capabilities (SWE-bench Verified score of 49%) and better multi-step task execution. Available via API on platforms like Amazon Bedrock and Google Cloud.
  • Claude 3.5 Haiku: Faster and more efficient than previous models, surpassing Claude 3 Opus in coding tasks. It is expected to be available later this month.
  • Computer Use (beta): Claude can interact with computer interfaces, performing tasks like cursor movement, clicking, and typing. Tested by companies like Replit, Asana, and Canva, the feature is still in beta and struggles with complex actions like drag-and-drop or brief notifications.
  • Performance: Claude outperforms other AI agents in benchmarks, but its accuracy (14.9% vs. 75% for humans) remains a limitation, and it can prematurely stop tasks.
  • Developer Access: Developers can explore these capabilities through Anthropic’s API, with rapid improvements expected in future iterations.
  • Safety and Oversight: Anthropic emphasizes the importance of responsible deployment, incorporating safety classifiers to prevent misuse like spam or misinformation.

Why This Matters:

These developments signal a major leap in AI task automation and human-AI collaboration. As Claude’s computer control abilities evolve, industries ranging from software testing to data processing stand to benefit from enhanced efficiency, although challenges around accuracy and security still need to be addressed.
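The observe-act loop that a computer-use agent like this runs (screenshot, decide, click or type, repeat) can be sketched with a stubbed model and a fake screen. The action names and plumbing below are invented for illustration and do not reflect Anthropic’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # "screenshot" | "click" | "type" | "done" (hypothetical action set)
    arg: object = None

class FakeScreen:
    """Stand-in for the real desktop: records what the agent does."""
    def __init__(self):
        self.log = []
    def screenshot(self):
        self.log.append("screenshot")
        return "<pixels>"
    def click(self, xy):
        self.log.append(f"click:{xy}")
    def type_text(self, text):
        self.log.append(f"type:{text}")

def scripted_model(observation, step):
    """Stub policy: a real agent would send the screenshot to the model
    and receive the next tool call back."""
    plan = [Action("screenshot"), Action("click", (120, 40)),
            Action("type", "hello"), Action("done")]
    return plan[step]

def run_agent(screen, model, max_steps=10):
    obs = None
    for step in range(max_steps):
        action = model(obs, step)
        if action.kind == "done":
            break
        elif action.kind == "screenshot":
            obs = screen.screenshot()
        elif action.kind == "click":
            screen.click(action.arg)
        elif action.kind == "type":
            screen.type_text(action.arg)
    return screen.log

log = run_agent(FakeScreen(), scripted_model)
# log == ["screenshot", "click:(120, 40)", "type:hello"]
```

The latency the article mentions comes from the real version of this loop: each iteration requires a screenshot upload and a model round-trip.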

A busy office where a Robot handles multiple screens and processes - Image generated by AI for The AI Track

Key Takeaway:

Timbaland has officially partnered with AI music start-up Suno, endorsing its software that reimagines music from user-created audio. Timbaland believes AI tools like Suno will redefine music production, adding to artists’ creative arsenals rather than replacing them. He dismisses fears about AI replacing human input, stressing the importance of collaboration between technology and musicians.

Key Points:

  • Suno’s Technology: Timbaland became a heavy user of Suno’s AI-driven platform, which can extend and remix user-uploaded audio using descriptive prompts.
  • Timbaland’s Role: Timbaland joins Suno as a creative advisor, showcasing the platform’s ability to turn human vocals into complete tracks and calling it the future of music creation.
  • Remix Contest: Suno and Timbaland launched a remix contest for his track “Love Again,” with over $100,000 in prizes, encouraging more engagement from musicians with AI music.
  • AI & Creativity: Timbaland views AI tools like Suno as extensions of existing music technology, much like Auto-Tune. He stresses that human creativity is still required, and AI simply enhances the process.
  • Lawsuit: Despite Suno facing legal challenges from the recording industry over copyright issues, Timbaland remains unfazed, seeing it as validation of the tool’s innovation.

Why This Matters:

Timbaland’s partnership with Suno reflects a growing trend of musicians embracing AI to revolutionize music production. This collaboration could lead to new creative possibilities, potentially changing how music is made while sparking important discussions around copyright and the role of AI in the arts.

Timbaland with headphones interacting with a laptop - Photo Generated by Flux for The AI Track

Key Takeaway:

News Corp, owner of Dow Jones, Wall Street Journal, and New York Post, has filed a lawsuit against Perplexity AI, accusing the startup of scraping copyrighted news content to train its AI models without authorization, leading to revenue loss by diverting traffic from original news sites. This lawsuit comes less than a week after The New York Times issued a cease-and-desist letter to Perplexity for similar practices.

Key Points:

  • Plaintiffs: News Corp, including Dow Jones and New York Post, allege that Perplexity violated copyright laws by using their news content without permission.
  • Accusations: The lawsuit claims Perplexity’s AI scrapes news content to generate answers, depriving media outlets of revenue through reduced site traffic.
  • Damages: News Corp seeks $150,000 per infringement plus Perplexity’s profits, aiming for substantial financial penalties.
  • Comparisons: News Corp highlights its licensing deal with OpenAI worth over $250 million, contrasting Perplexity’s practices as “massive freeriding” on journalistic work.
  • Legal Precedent: This lawsuit is part of a broader wave of legal actions from media outlets against AI companies, with Forbes and Condé Nast raising similar claims, alongside a cease-and-desist letter from The New York Times.
  • Perplexity’s Defense: Perplexity contends that it does not use scraped content for training its AI models but rather acts as an index for real-time information, defending that facts cannot be copyrighted.

Why This Matters:

This legal action underscores growing tensions between AI firms and media companies over intellectual property and content use. The outcome could reshape how AI models interact with copyrighted material, potentially setting a legal precedent for compensating publishers whose content is used without authorization.

A digital gavel striking down on a code-covered book - Photo Generated by Flux for The AI Track

Key Takeaway:

IBM has launched its Granite 3.0 AI models, designed for enterprise tasks, with a focus on safety, efficiency, and customization, offering strong performance and cost-effectiveness for various business applications. The new models provide advanced capabilities in language tasks, time series forecasting, and agentic AI for businesses.

Key Points:

  • Granite 3.0 Models: IBM’s new models include general-purpose and specialized models, such as Granite 3.0 8B/2B for enterprise AI tasks and Granite Guardian 3.0 for safety and guardrails.
  • Performance and Cost Efficiency: These models outperform similar-sized competitors like Meta’s models across various benchmarks, using fewer resources with cost savings of 3x-23x.
  • Mixture-of-Experts Models: The 1B and 3B MoE models deliver low latency and efficiency, ideal for edge computing and CPU-based deployments.
  • Time Series Model: Granite Time Series outperforms models 10x larger in zero/few-shot forecasting, enabling state-of-the-art performance for financial and operational forecasting.
  • Responsible AI & Safety: Granite Guardian 3.0 provides comprehensive risk detection, outperforming Meta’s models in safety benchmarks, including bias, violence, and toxicity checks.
  • Availability and Collaboration: Granite 3.0 models are available under Apache 2.0 license via platforms like Hugging Face, and integrations with Google Cloud, NVIDIA NIM, and Qualcomm.

Why This Matters:

IBM’s Granite 3.0 models signify a major advancement in AI tailored for businesses, addressing both performance and safety concerns. They enable companies to optimize their AI usage with lower costs, while the Granite Guardian ensures compliance with risk mitigation, making AI implementation safer and more efficient across industries.

Business desk with a laptop displaying AI-powered analytics - Photo Generated by Flux for The AI Track

Key Takeaway:

NVIDIA’s Llama 3.1 Nemotron 70b, an open-source AI model, surpasses prominent closed-source models like GPT-4o and Claude 3.5 Sonnet, setting a new standard for open-source AI.

Key Points:

  • Model Size: Nemotron 70b, a 70-billion parameter model, beats competitors in performance.
  • Development Techniques: Uses advanced reinforcement learning, sophisticated reward modeling (Bradley-Terry and regression-style models), and the HelpSteer2 dataset for refined training.
  • Application & Performance: Performs exceptionally in benchmarks, offering potential for broader use in diverse sectors.

Why This Matters:

This model reflects the growing power of open-source AI, enabling innovation through transparency and accessibility, potentially reshaping how AI evolves in the future.
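Bradley-Terry reward modeling, mentioned above, trains a scorer so that a preferred response outranks a rejected one; the standard pairwise loss is -log σ(r_chosen − r_rejected). A minimal numeric illustration (the reward values here are made up):

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood that the chosen response beats the rejected one
    under the Bradley-Terry model: -log sigmoid(r_chosen - r_rejected)."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the reward model already ranks the pair correctly, the loss is small...
low = bradley_terry_loss(2.0, -1.0)   # margin +3.0, loss ~0.049
# ...and when it ranks the pair the wrong way round, the loss is large.
high = bradley_terry_loss(-1.0, 2.0)  # margin -3.0, loss ~3.05
```

Training minimizes this loss over many human-labeled preference pairs, pushing chosen-response scores above rejected ones.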

Futuristic brain made of circuits with a padlock symbolizing open-source AI - Photo Generated by Flux for The AI Track

Key Takeaway: Meta is releasing Open Materials 2024 (OMat24), a massive open-source data set and AI models, to help accelerate the discovery of new materials. The release is designed to remove one of the largest bottlenecks in materials science—access to high-quality, extensive data. By offering this resource for free, Meta is empowering scientists to use AI for faster, cheaper simulations that could help address critical challenges, such as climate change, through innovations in batteries and sustainable fuels.

Key Points:

  • OMat24 Overview:

    Meta’s Open Materials 2024 data set contains around 110 million data points, making it one of the largest publicly available materials science data sets. It surpasses previous proprietary data sets, offering high-quality simulations of elements and their combinations across the periodic table. This initiative positions Meta as a key player in accelerating AI-driven material discovery.

  • AI’s Role in Materials Discovery:

    AI and machine learning models have revolutionized materials science by allowing for faster, more affordable simulations. Previously, researchers had to choose between accurate calculations on small systems and less accurate ones on larger systems. AI now bridges this gap, enabling large-scale simulations with high accuracy. Meta’s OMat24 is expected to lead the Matbench Discovery leaderboard, which ranks machine learning models in the field.

  • Impact of Open-Source Access:

    Unlike other tech giants like Google and Microsoft, which have kept their data proprietary, Meta’s decision to make OMat24 publicly available is seen as a significant contribution to the scientific community. The open access will help scientists across the globe use the data set to experiment, build upon it, and create new innovations in materials science. This democratization of data is predicted to advance the field much faster than proprietary models.

  • Broader Industry Impact:

    Meta’s move to release OMat24 has been hailed as transformative by experts like Gábor Csányi (University of Cambridge) and Chris Bartel (University of Minnesota). Previous open databases, such as those from the Materials Project, have already led to significant advances in computational materials science, and OMat24 is expected to have a similar, if not greater, impact. The sheer size and quality of the data are expected to accelerate research in fields like nanoengineering, chemistry, and material science.

  • Meta’s Strategic Interest:

    Aside from contributing to the scientific community, Meta also has a vested interest in materials science. The company hopes that the new materials discovered through AI-driven methods will help make its smart augmented reality glasses more affordable and efficient.

Why This Matters: Meta’s open-source approach with OMat24 is set to accelerate materials science research at a global scale, enabling scientists to discover new materials for energy, technology, and sustainability applications much faster than before. The scale and accuracy of the data set make it a valuable tool for addressing global challenges like climate change through innovations in battery technology and sustainable fuels.

Key Takeaway:

Mistral AI introduces two new AI models, Ministral 3B and 8B, designed for on-device and edge computing. These models offer superior performance in efficiency, reasoning, and handling large contexts, catering to industries needing low-latency solutions like autonomous robotics, local analytics, and offline smart assistants.

Key Points:

  • Performance: Ministral models outperform competitors in benchmarks, excelling in commonsense reasoning and function-calling tasks.
  • Use Cases: Ideal for privacy-first applications such as translation, smart assistants, and agentic workflows.
  • Context Length: Both models support up to 128k context length, offering fast and memory-efficient inference.
  • Pricing and Licensing: Available for $0.04 to $0.10 per million tokens; commercial and research licenses are offered for different use cases.

Why This Matters:

Ministral 3B and 8B represent a leap forward in edge AI technology, providing businesses and developers with powerful, low-latency solutions that can be used offline or with minimal infrastructure.
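The quoted per-token rates translate directly into workload costs. A quick estimate, using the article’s rates and an invented 50M-token/day workload:

```python
def cost_usd(tokens: int, rate_per_million: float) -> float:
    """Cost of processing `tokens` at a given $/1M-token rate."""
    return tokens / 1_000_000 * rate_per_million

# Hypothetical workload: 50M tokens per day at the article's quoted rates,
# $0.04/M (cheaper end, e.g. the 3B model) vs. $0.10/M (upper end).
daily_low = cost_usd(50_000_000, 0.04)   # $2.00/day
daily_high = cost_usd(50_000_000, 0.10)  # $5.00/day
```

At these rates, even high-volume edge workloads cost only a few dollars a day, which is the economic argument for small on-device models.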

Key Takeaway: The New York Times issued a cease-and-desist letter to Perplexity AI, a startup backed by Jeff Bezos, accusing it of unauthorized use of its articles for AI-generated summaries. The dispute highlights the growing tension between AI companies and publishers over content usage and copyright.

Key Points:

  • The New York Times claims Perplexity violated copyright by using its content without permission, demanding the startup stop its practices.
  • Perplexity defends itself, arguing it doesn’t scrape data for AI training and that facts cannot be copyrighted.
  • Similar allegations have been raised by Forbes and Condé Nast.
  • Perplexity has secured some deals with publishers but still faces scrutiny.

Why This Matters: This conflict illustrates the larger battle over content ownership in the AI-driven information landscape. Publishers fear that AI summaries could reduce their site traffic, impacting subscription and ad revenue models.

Key Takeaway:

Google has signed a groundbreaking agreement with Kairos Power to utilize small modular nuclear reactors (SMRs) to power its AI data centers, providing clean, round-the-clock energy by 2030.

Key Points:

  • First of its Kind: Google is the first company to secure nuclear energy from SMRs to meet its rising electricity needs for AI, signing a deal with Kairos Power.
  • Timeline: The first reactors are set to power Google’s data centers by 2030, with additional reactors to follow by 2035.
  • Capacity & Impact: The deal is expected to deliver 500 MW of capacity, enough to power roughly 360,000 homes. This clean energy solution will play a vital role in supporting the growing energy demands of AI technology.
  • Technology & Innovation: SMRs are designed to offer a flexible, low-carbon energy solution. Kairos Power uses a molten-salt cooling system, which allows for shorter construction times and adaptability to various locations.
  • Environmental & Industry Relevance: As the demand for cloud computing and generative AI grows, Google’s move toward nuclear energy reflects the broader tech industry’s push for sustainable, clean energy sources. This follows similar initiatives by other tech giants like Microsoft and Amazon.

Why This Matters:

With AI usage increasing exponentially, the energy requirements for data centers are also growing. Nuclear energy offers a reliable, clean power source that aligns with environmental sustainability goals. Google’s adoption of SMRs not only reflects the urgent need to support AI infrastructure but also sets a precedent for other tech companies to invest in nuclear energy. The partnership with Kairos Power could revolutionize the future of energy in tech, fostering innovation while mitigating the environmental impact of data-driven technologies.
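As a sanity check on the quoted figures, 500 MW spread across 360,000 homes works out to roughly 1.4 kW of average continuous draw per home:

```python
total_capacity_mw = 500   # capacity from the Kairos Power deal
homes_powered = 360_000   # figure quoted in the article

# Convert MW to kW and divide by the number of homes.
avg_kw_per_home = total_capacity_mw * 1000 / homes_powered  # ~1.39 kW per home
```

That is a plausible average household draw, so the two figures in the article are consistent with each other.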

Key Takeaway:

Nvidia has achieved a record high stock price, driven by unprecedented demand for its AI chips, particularly the new Blackwell GPUs, fueling expectations for further growth as it nears the top spot in market capitalization.

Key Points:

  • Stock Surge: Nvidia’s stock closed at a record high of $138.07 on October 14, reflecting a nearly 180% surge in 2024, driven by demand for AI chips.
  • Market Dominance: Nvidia controls an estimated 70% to 95% of the AI chip market and is valued at $3.4 trillion, close to surpassing Apple as the world’s most valuable company.
  • Blackwell Chips: The next-generation Blackwell GPUs are already sold out for 12 months, with an expected $7 billion in revenue from these chips in Q4 2024.
  • Key Clients: Nvidia supplies major players like OpenAI, Microsoft, and Google, with demand expected to double by year-end.
  • Positive Industry Indicators: Other chipmakers like TSMC and Nvidia-backed companies have also seen stock gains, reflecting the growing AI sector.

Why This Matters:

Nvidia’s success underscores the explosive growth in AI demand, where advanced GPUs are essential for AI technologies and data centers. With ongoing innovation, Nvidia is set to drive future AI infrastructure, making its chips indispensable to tech giants. This record performance highlights Nvidia’s pivotal role in the AI revolution, cementing its position as a leader in AI-driven industries.

Key Takeaway: Adobe aims to train 30 million global learners in AI literacy, content creation, and digital marketing skills through its expanded Adobe Digital Academy, partnerships with NGOs, educational platforms, and $100M in scholarships and donations.

Key Points:

  • Adobe’s goal is to help learners from diverse backgrounds thrive in the modern workforce with AI and digital skills.
  • Partnerships with Coursera, DECA, and General Assembly provide training and career pathways.
  • Over $100M invested in scholarships, product access, and NGO collaborations to empower learners.
  • Curriculum includes Generative AI, Social Media, and Multimedia Content creation, with courses available on Coursera and Adobe Express.
  • Adobe will offer free boot camps in collaboration with General Assembly in the U.S., U.K., and India by early 2025.

Why This Matters: This initiative will bridge the skills gap in the emerging AI-driven economy, providing individuals from diverse backgrounds with critical tools to succeed in the workforce.

Key Takeaway:

OpenAI’s experimental “Swarm” framework focuses on coordinating networks of autonomous AI agents to tackle complex tasks, creating significant potential for enterprise automation.

Key Points:

  • Swarm Functionality: Multi-agent systems collaborate autonomously, reducing human involvement in complex processes.
  • Enterprise Application: Could revolutionize business operations by automating multi-departmental tasks.
  • AI Community Debate: Concerns include security, ethical implications, and job displacement due to increased automation.

Why This Matters:

Swarm could transform industries through AI-driven collaboration but requires careful consideration of societal impacts.

Key Takeaway:

Microsoft is introducing new AI and data solutions to enhance healthcare through its Microsoft Cloud for Healthcare platform. Key innovations include advanced patient insights, secure data exchange, and AI-powered clinical decision support, all aimed at improving patient care, operational efficiency, and health outcomes. The solutions emphasize responsible AI deployment, data interoperability, and broader healthcare accessibility for patients and providers alike.

Key Takeaway:

AMD has launched the Instinct MI325X, a new AI chip aimed at competing with Nvidia’s dominant GPUs in the data center market. With a focus on generative AI applications, this launch signals AMD’s push to capture more market share as AI demand surges.

Key Points:

  • Product Launch: Instinct MI325X will start production in 2024, competing with Nvidia’s Blackwell.
  • Market Goals: AMD aims to grow its share of an AI chip market projected to be worth $500 billion by 2028.
  • Performance: The MI325X delivers up to 40% better inference performance than Nvidia’s H200 on Meta’s Llama 3.1.
  • Challenges: AMD’s chips face competition from Nvidia’s CUDA software, which is widely adopted by AI developers.
  • New CPUs: AMD also announced new EPYC 5th Gen CPUs to complement AI workloads, including configurations from 8-core to 192-core processors, priced from $527 to $14,813.

Why This Matters:

AMD’s latest AI chip, MI325X, is set to challenge Nvidia’s dominance, offering alternatives for companies invested in AI infrastructure. The chip’s higher performance in specific use cases, like content creation and prediction, positions AMD as a strong competitor in the rapidly growing AI market.

Key Takeaway:

Amazon is revolutionizing both logistics and retail with advanced AI technologies, introducing AI-powered fulfillment centers, AI Shopping Guides, and autonomous AI shopping agents. These innovations streamline fulfillment processes, improve customer experiences, and enhance workplace safety while promoting sustainability.

Key Points:

  • Shreveport Fulfillment Center: A new 3-million-square-foot center integrates AI and robotics for faster, safer logistics.
  • AI Shopping Guides: Enhanced product discovery using Amazon’s large language models (LLMs).
  • Autonomous AI Agents: Proactive agents like “Rufus” offer personalized shopping experiences and automate purchasing.

Why This Matters:

These advancements redefine logistics and retail efficiency, emphasizing automation, personalized service, and sustainability.

Key Takeaway:

Meta AI expands to six new countries, including Brazil and the UK, as part of a broader global rollout. This expansion aims to increase Meta AI’s presence in 43 countries, with plans to introduce new language support and enhanced features.

Key Points:

  • Launch Details: Meta AI debuts in Brazil, the UK, the Philippines, Bolivia, Guatemala, and Paraguay. More countries in the Middle East and Asia will follow.
  • Global Reach: Meta AI aims to reach 43 countries and add support for Arabic, Indonesian, Thai, and Vietnamese.
  • Platform Availability: Accessible on Facebook, Instagram, WhatsApp, Messenger, and Meta.ai.
  • User Base: Nearly 500 million users globally, with India as the largest market, primarily through WhatsApp.

Why This Matters:

Meta’s expansion strengthens its competitive position against AI rivals like ChatGPT, aiming to become a dominant global AI assistant.

The 2024 Nobel Prize in Chemistry goes to Demis Hassabis, John M. Jumper (Google DeepMind), and David Baker (University of Washington) for their revolutionary contributions in protein structure prediction and design.

🔍 What’s the Big Deal?

  • AlphaFold’s Breakthrough: DeepMind’s AlphaFold can predict the 3D structure of almost all 200 million known proteins from their sequences. This game-changing AI model took the challenge that puzzled biochemists for 50 years and cracked it, offering results in minutes instead of years.
  • David Baker’s Innovation: In 2003, Baker became the first to design a new protein from scratch. His Rosetta software, now enhanced with AI, has led to the creation of entirely new proteins with novel functions, opening doors for advances in medicine, materials, and sustainability.

🧬 Why It Matters: Understanding the shape of proteins is key to breakthroughs in drug design, vaccines, and understanding diseases. This year’s award highlights how AI is reshaping the world of science, just as the 2024 Nobel Prize in Physics did for AI pioneers Geoffrey Hinton and John J. Hopfield.

Geoffrey Hinton and John Hopfield win the Nobel Prize for their groundbreaking contributions to artificial neural networks, which are the backbone of modern machine learning. 🌍🤖

Their work has revolutionized AI, driving advances in everything from image recognition to scientific discovery. From Hinton’s Boltzmann machine to Hopfield’s associative memory, their contributions have laid the foundation for today’s AI-driven world.

Key Takeaway:

OpenAI and Condé Nast have formed a multi-year partnership to integrate content from Condé Nast’s brands, including Vogue, Wired, and The New Yorker, into OpenAI’s products like ChatGPT and SearchGPT. This deal allows OpenAI to use Condé Nast’s archives, facilitating real-time information access.

Key Points:

  • Partnership Scope: Includes content integration from Vogue, Wired, The New Yorker within ChatGPT and SearchGPT.
  • Launch Context: Follows OpenAI’s July launch of SearchGPT, competing with Google in real-time information retrieval.
  • Financial Terms: Not disclosed.
  • Condé Nast’s Position: CEO Roger Lynch emphasizes balancing technology use with fair attribution and compensation.
  • Broader Trends: Contrasts with media entities like The New York Times and The Intercept, which have sued OpenAI over content use.

Why This Matters:

This partnership highlights the evolving landscape of generative AI and media, emphasizing collaboration over litigation in adapting to new content distribution models. It illustrates how AI companies and traditional publishers can align interests, while others remain cautious, prioritizing content ownership.

Key Takeaway:

The AI Platform Alliance is growing, bringing together major players in chips, cloud services, and AI software to create more open, efficient, and sustainable AI solutions.

  • New Members: Adlink, Canonical, Deepgram, Supermicro, and more join the alliance, expanding AI’s ecosystem.
  • Mission: Develop practical AI solutions that outperform GPU-based systems in efficiency and cost.
  • Impact: Aims to accelerate AI innovation while promoting transparency and sustainability.

Key Takeaway:

MovieGen is a powerful AI tool designed to quickly generate high-quality videos by transforming simple text into polished visual content. It caters to creators in various fields like marketing, education, and entertainment, allowing users to easily create engaging videos without technical skills.

Key Points:

  • Converts text prompts into professional videos.
  • Ideal for businesses and content creators for marketing and storytelling.
  • Offers various templates and customization options for unique visual output.

Why This Matters:

This tool democratizes video creation by enabling users to create professional content without complex editing skills, making it accessible for broad use cases.

Key Takeaway:

OpenAI has secured a $4 billion revolving credit line from major banks, increasing its liquidity to over $10 billion. This will help the company invest in AI research, infrastructure, and talent as it scales globally.

Key Points:

  • The credit line is led by JPMorgan Chase, Citi, and other major banks, with an option to expand it by $2 billion.
  • OpenAI’s recent $6.6 billion funding round valued the company at $157 billion.
  • The funds will support AI research, product development, and infrastructure expansion amid growing demand.
  • OpenAI expects to generate $11.6 billion in sales next year, but the company also faces significant costs, including Nvidia GPU purchases for AI model training.
  • OpenAI may restructure to become a for-profit business, allowing further capital investments.

Why This Matters:

OpenAI’s credit line and funding give it financial flexibility to sustain its rapid AI development and growth. This move positions OpenAI as a major force in AI advancements, with a focus on scaling infrastructure and talent recruitment globally, despite challenges in operational costs.

Key Takeaway: Canvas is a new feature from OpenAI that enhances the collaboration experience with ChatGPT, allowing for direct edits, targeted feedback, and a separate workspace for writing and coding projects beyond simple chat.

Key Points:

  • New Interface: Canvas enables simultaneous work and editing on a project, offering a separate workspace.
  • Collaboration Tools: Features shortcuts like adjusting length, changing reading level, adding comments, and debugging code.
  • Availability: Currently in beta for Plus, Team, and Enterprise users, with wider rollout planned.
  • Performance Improvements: Canvas is designed for better collaboration and targeted improvements based on user input.

Why This Matters: Canvas moves ChatGPT beyond conversation to a tool that directly supports creative and technical workflows, enhancing productivity and ease of use for coding and writing projects.

Key Takeaway:

Microsoft has announced a significant €4.3 billion investment over two years to expand AI and cloud infrastructure in Italy, aiming to strengthen its presence in Northern Italy and train over 1 million people in digital and AI skills by 2025.

Key Points:

  • Infrastructure Expansion: Microsoft will create one of Europe’s largest data center hubs in Northern Italy, enhancing AI and cloud services for key sectors like manufacturing, healthcare, and finance.
  • Digital Skills Training: The initiative will train more than 1 million Italians in AI fluency, technical skills, and AI business transformation to address Italy’s aging workforce and productivity challenges. Generative AI adoption could increase Italy’s GDP by €312 billion over 15 years.
  • Sustainability Focus: Microsoft’s Italian data centers will leverage advanced cooling systems and renewable energy to ensure high water and energy efficiency, marking a commitment to sustainability with biofuel-powered backup generators and participation in reforestation projects in Milan.
  • Partnerships: Programs like the AI L.A.B. and partnerships with Italian organizations will drive AI innovation, particularly in small and medium-sized enterprises, enhancing productivity and global competitiveness.

Why This Matters:

This investment represents a major boost for Italy’s digital transformation, positioning the country as a key player in Europe’s AI infrastructure while addressing demographic challenges, fostering innovation, and ensuring responsible AI development.

Key Takeaway: OpenAI has raised $6.6 billion in a new funding round, bringing its valuation to $157 billion. The funds will be used to accelerate the development of advanced AI models, with the company’s long-term vision centered on artificial general intelligence (AGI). The investment, led by Thrive Capital, is critical for the company’s mission but contingent on OpenAI’s potential restructuring into a for-profit entity.

Key Points:

  1. Historic Funding Round:
    • OpenAI closed a $6.6 billion funding round, valuing the company at $157 billion. This funding is essential for supporting its ongoing AI model developments and expanding its technological capabilities.
    • Thrive Capital led the round with a $1 billion commitment and the option to invest an additional $1 billion next year if OpenAI reaches revenue milestones.
  2. Potential Restructuring:
    • OpenAI’s current for-profit wing is overseen by a nonprofit body, and investor profits are capped at 100x. If OpenAI does not restructure itself into a public benefit corporation within two years, investors may demand their funds back.
    • The restructuring could be necessary for the company to fulfill the terms of the funding, moving OpenAI closer to a fully for-profit model.
  3. Revenue Projections and Hype:
    • OpenAI’s monthly revenue reached $300 million in August 2024, with projected annual sales of $3.7 billion this year and expectations of $11.6 billion next year.
    • The company is currently valued at 40 times its reported revenue, reflecting the high level of excitement and investment in AI technology.
  4. Funding Rivalry:
    • This funding round comes as competition intensifies in the AI sector, particularly with Elon Musk’s xAI, which raised $6 billion earlier in 2024.
    • OpenAI has asked its investors to avoid funding rival AI startups like Anthropic and xAI.
  5. Expensive AI Model Development:
    • The funds will be used for the costly task of training frontier AI models, which require enormous computing power. Industry leaders estimate that future AI models could cost up to $100 billion to train, highlighting the need for large-scale financial backing.

Why This Matters: OpenAI’s latest funding round highlights the enormous costs associated with training advanced AI models and the industry’s belief in AI’s transformative potential. With a $157 billion valuation, the company is in a strong position to lead in AI development, but its future hinges on balancing investor interests, potential restructuring, and continued innovation in the face of growing competition.

Oracle recently announced its plans to significantly increase investment in Malaysia, committing to expanding its cloud computing and AI infrastructure in the region. This is part of Oracle’s broader initiative to support AI-driven solutions and the digital transformation of enterprises globally.

Key Points:

  • Investment Focus: Oracle’s push in Malaysia will see a substantial investment aimed at building a more robust cloud and AI infrastructure to support businesses in the region. This includes the establishment of new data centers and AI solutions to accelerate digital transformation.
  • Cloud Computing Expansion: Oracle aims to help Malaysian companies leverage cloud technology for data processing, storage, and AI capabilities, providing enhanced scalability and flexibility for various industries.
  • Regional Impact: The investment aligns with Oracle’s strategy to strengthen its position in Southeast Asia. This move also reflects the increasing demand for cloud computing and AI tools in the region, which is driving growth in sectors like financial services, telecommunications, and government.
  • Broader AI Push: Oracle’s investments in AI will also focus on improving AI-driven services, enabling businesses to utilize AI for operational efficiency, data analytics, and innovation in digital services.

Why This Matters:

This investment signifies Oracle’s commitment to accelerating digital growth in Malaysia, leveraging AI and cloud technology. It highlights the global trend of tech companies investing in AI and cloud infrastructure to cater to the rising demand for digital transformation solutions across various sectors. Oracle’s involvement will help businesses scale AI applications, improve efficiency, and foster innovation, ultimately contributing to the region’s economic growth.

Nvidia has unveiled its groundbreaking open-source AI model, NVLM 1.0, which directly challenges GPT-4 and Google’s AI models. This model supports multimodal tasks, blending vision and language, and aims to advance conversational AI, NLP, and other AI-driven capabilities. Nvidia’s focus on openness and scalability positions NVLM 1.0 as a major contender in the AI landscape. The model promises to handle more complex tasks, setting the stage for the next generation of large-scale AI applications.

OpenAI recently introduced four major AI features, designed to enhance the capabilities available to developers and businesses:

  1. Vision Fine-Tuning: Developers can now fine-tune GPT-4o models with images, enabling applications such as improved object detection, medical image analysis, and enhanced visual search functionalities. By allowing the integration of both text and visual inputs, this feature unlocks new possibilities for AI-driven tasks that involve complex image processing.
  2. Realtime API: This new API focuses on accelerating speech-to-speech applications, making voice-based AI interactions more natural and responsive. By eliminating the need for multiple models, it improves performance and reduces latency, which is crucial for conversational AI, voice assistants, and interactive applications.
  3. Model Distillation: This feature allows developers to train smaller models by using the outputs of larger models, significantly reducing the computational cost. It brings advanced AI capabilities to companies with limited resources, making high-quality AI more accessible.
  4. Prompt Caching: To cut down API usage costs, OpenAI introduced prompt caching. This feature stores commonly used prompts for up to an hour, applying a 50% discount on repeated inputs. It is particularly useful for applications with long or repetitive prompts, offering significant savings for developers.

These new features aim to improve efficiency, performance, and accessibility across various industries, enabling a broader range of use cases with AI-driven tools.
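The prompt-caching discount described above is easiest to see with a quick cost sketch. The per-token rate below is a hypothetical placeholder, not OpenAI’s published pricing; the point is only how billing changes once the repeated portion of a prompt is cached at 50%:

```python
# Illustrative cost model for prompt caching.
# RATE_PER_1K_INPUT is a made-up placeholder rate, not real OpenAI pricing.

RATE_PER_1K_INPUT = 0.005   # assumed price per 1,000 uncached input tokens
CACHED_DISCOUNT = 0.5       # cached input tokens billed at 50%

def input_cost(total_tokens: int, cached_tokens: int) -> float:
    """Cost of one request's input tokens, given how many hit the cache."""
    uncached = total_tokens - cached_tokens
    cost = (uncached * RATE_PER_1K_INPUT / 1000
            + cached_tokens * RATE_PER_1K_INPUT * CACHED_DISCOUNT / 1000)
    return round(cost, 6)

# A 4,000-token system prompt reused across many requests:
# the first call pays full price; later calls reuse the cached prefix.
first_call = input_cost(4000, cached_tokens=0)    # 0.02
later_calls = input_cost(4000, cached_tokens=4000)  # 0.01
print(first_call, later_calls)
```

For applications that resend a long system prompt or document on every request, the cached portion of each follow-up call costs half as much, which is where the savings OpenAI advertises come from.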

Microsoft is enhancing its AI capabilities with the introduction of a new AI companion designed to assist both individuals and businesses in their everyday tasks. This AI companion, embedded in Microsoft Copilot, is accessible across various devices and platforms, making AI support more widespread and easier to use. This initiative aligns with Microsoft’s broader vision of democratizing AI to empower people globally, turning AI into a more personal and ubiquitous tool.

The AI Companion integrates into Microsoft 365 apps such as Word, Excel, Outlook, and PowerPoint, offering advanced assistance in writing, designing, coding, and managing tasks. It responds to natural language commands, allowing users to delegate routine tasks, such as drafting emails, creating presentations, or summarizing meetings, which improves productivity and creativity.

Key features of the companion include:

  1. Customization: Users can create Copilot GPTs, AI systems tailored to specific tasks such as fitness or travel planning, enhancing personalization and task automation.
  2. Image Creation: With the Designer tool, users can edit images directly within Copilot, resizing or regenerating visuals for different formats, making it useful for both personal and business use.
  3. Enhanced Business Integration: Businesses, including large enterprises and small-to-medium-sized companies, can now access Copilot to improve work processes, data management, and productivity with natural language queries. The tool integrates into email, documents, and other data systems to streamline operations.

As Microsoft continues to roll out these AI enhancements, the Copilot Pro subscription is available for those seeking more advanced AI capabilities, such as priority access to GPT-4 Turbo and the ability to build custom AI models.
