AI News February 2024: In-Depth and Concise

Welcome to The AI Track's comprehensive monthly roundup of the latest AI news!

Each month, we compile significant news, trends, and happenings in AI, providing detailed summaries with key points in bullet form for concise yet complete understanding.

A guy relaxed reading AI News February 2024 - Image generated by Midjourney for The AI Track

This page features AI News for February 2024. At the end, you will find links to our archives for previous months.

AI NEWS February 2024

Tumblr and WordPress, owned by Automattic, are reportedly in talks to sell user data to AI companies like OpenAI and Midjourney for training purposes.

  • Automattic, the parent company of WordPress.com and Tumblr, is reportedly in discussions with AI companies Midjourney and OpenAI to sell training data scraped from users’ posts.
  • A report from 404 Media suggests that these deals are “imminent,” based on information from an anonymous source within Automattic.
  • The agreement is reportedly nearing completion and may include private or partner-related data, raising privacy concerns since users were not explicitly informed that their data could be sold.
  • Automattic may have initially overreached by including questionable content in the data, such as private posts, deleted or suspended blogs, and explicit content from premium partner blogs.
  • There are uncertainties regarding whether the data has already been shared with the AI companies.
  • Automattic has issued a public statement titled “Protecting User Choice,” vaguely referencing partnerships with AI companies and emphasizing user control and consent.
  • The company plans to launch an opt-out setting that lets users block third parties, including AI companies, from training on their data.
  • It also intends to regularly update partners about users who opt out and to request the removal of their content from past and future training.
  • The language suggests Automattic will advocate for data removal from AI training, but compliance ultimately relies on the goodwill of the AI companies.
  • Automattic has faced challenges in monetizing Tumblr since acquiring it from Verizon in 2019 and has downscaled its ambitions for the platform.
  • OpenAI has been using publicly available internet content, including Tumblr posts, for developing its services.
  • Other platforms have struck similar deals with AI tool makers, indicating an industry trend: Reddit has a reported $60 million annual deal with Google, and Shutterstock has an agreement allowing OpenAI to train on its photo library.

  • Microsoft and Mistral AI have formed a multi-year partnership to make Mistral AI’s models accessible through Microsoft’s Azure cloud computing platform.
  • The deal reflects Microsoft’s intent to diversify its AI model portfolio beyond OpenAI and to attract more customers to its Azure cloud services.
  • Microsoft will also acquire a minority stake in Mistral AI.
  • The specifics of the investment have yet to be disclosed.
  • Despite being founded less than a year ago, Mistral AI has made headlines as an ambitious innovator; its goal is to build more efficient and affordable AI systems.
  • The startup gained significant investor momentum, swiftly achieving a €2 billion valuation.
  • Tech behemoths like Amazon and Google are among the top firms Mistral AI has worked with to distribute its AI models.
  • Headquartered in Paris, Mistral AI builds both open-source and proprietary large language models (LLMs), akin to OpenAI’s ChatGPT, capable of comprehending and generating human-like text.
  • The startup was founded by ex-researchers from Google and Meta: Timothée Lacroix, Guillaume Lample, and Arthur Mensch.
  • Mistral AI endorses an anti-restrictive policy regarding foundation models in hopes of stimulating innovation within the AI industry.
  • Mistral AI has also unveiled its latest offering, Mistral Large, which it positions as a rival to industry frontrunners such as OpenAI’s GPT-4 and Google’s Gemini Pro.
  • Mistral Large and Mistral Small, offering improved latency, will be available through Mistral’s infrastructure or Azure AI Studio and Azure Machine Learning.
  • Unlike its predecessors, Mistral Large will not be open source, marking a shift toward more commercial endeavors under the Microsoft partnership.
  • Additionally, Mistral is releasing a new conversational chatbot named Le Chat.
  • Microsoft’s investment in Mistral AI comes amidst regulatory scrutiny in Europe and the U.S. regarding its significant funding in OpenAI, highlighting the competitive dynamics within the AI market.

This partnership marks Microsoft’s strategic move to diversify its AI ecosystem and reduce reliance on a single provider like OpenAI.

Mistral AI has released Mistral Large, its new flagship language model.

  • The model is designed for complex multilingual reasoning tasks and handles code generation, transformation, and reading comprehension with ease.
  • Mistral AI has also introduced its take on ChatGPT, named Le Chat.
  • Mistral Large is available through a paid API with usage-based pricing: $8 per million input tokens and $24 per million output tokens.
  • Besides English, it supports several European languages, including French, German, Spanish, and Italian.
  • A partnership with Microsoft also makes Mistral Large accessible on the Azure AI platform.
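To make the usage-based pricing above concrete, here is a minimal sketch estimating the cost of a single API request from the published per-token rates; the token counts in the example are hypothetical, chosen only for illustration.

```python
# Estimate the cost of one Mistral Large API call from the published
# rates: $8 per million input tokens, $24 per million output tokens.
# The token counts below are hypothetical, for illustration only.

INPUT_RATE = 8.0 / 1_000_000    # dollars per input token
OUTPUT_RATE = 24.0 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt with a 500-token completion:
cost = request_cost(2_000, 500)
print(f"${cost:.4f}")  # prints $0.0280
```

Because output tokens cost three times as much as input tokens, long completions dominate the bill even for prompt-heavy workloads.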
Google has temporarily paused Gemini’s ability to generate images of people after the feature produced inaccurate depictions.

  • Reason for Pause: Google identified issues with Gemini generating inaccurate images of people, such as portraying historical figures with races or genders different from their actual identities.
  • Focus on Improvement: Google is actively working on addressing these issues and intends to release an improved version of Gemini with enhanced accuracy and responsible representation.
  • Functionality Impacted: The pause specifically applies to the image generation feature, while other functionalities of Gemini remain operational.
  • Limited Timeframe: The announcement suggests a temporary suspension, implying that the image generation feature will resume after improvements are implemented.
  • Ethical Considerations: This event underscores the importance of responsible development and deployment of AI models, particularly regarding potential biases and the need for accurate representation.

Overall, Google’s decision to pause the image generation feature of Gemini demonstrates their commitment to addressing ethical concerns and ensuring responsible use of their AI technology. However, it remains to be seen how effectively they can address these issues and how long it will take to reintroduce the functionality with the necessary improvements.

  • Expanding Gemini Accessibility: Google is extending Gemini’s reach beyond its initial research phase, offering access to paying Google One users and integrating it with Workspace and Chrome browser for broader utilization.
  • Chrome Integration: An experimental AI writing feature within Chrome allows users to receive suggestions and assistance while composing text on any website.
  • Workspace Integration: Gemini is now available to Google Workspace customers, enabling them to leverage its capabilities for enhanced writing and content creation within the Workspace suite.
  • Limited Functionality: The current features of Gemini primarily focus on creative writing assistance and content generation, with further development needed to encompass broader applications.

Overall, Google’s efforts highlight the potential of large language models like Gemini to revolutionize how we interact with technology and generate creative content.

Microsoft and Intel have announced a strategic partnership under which Intel’s foundry will manufacture custom chips for Microsoft, aiming to bolster their competitiveness in the semiconductor market against industry leaders like TSMC, NVIDIA, and AMD.

  • Partnership Focus: Microsoft and Intel will collaborate on designing and developing new chips, leveraging Intel’s x86 architecture and expertise, and potentially incorporating Microsoft’s AI and software capabilities.
  • Foundry Roadmap: Intel anticipates surpassing TSMC in producing the fastest chips this year, fueled by its “Intel 18A” manufacturing technology, and aims to extend that lead with “Intel 14A” in 2026.
  • Surpassing TSMC: Intel expresses confidence in exceeding its initial goal of overtaking TSMC in chip performance by 2025, aiming to achieve this feat within 2024.
  • “Intel 18A” Technology: This new technology is the driving force behind Intel’s projected lead, promising advancements in chip performance and efficiency.
  • “Intel 14A” Technology: Unveiled alongside “18A,” this future technology suggests Intel’s commitment to maintaining its lead beyond 2024.
  • Microsoft as Foundry Customer: Microsoft’s partnership with Intel for chip manufacturing signifies growing industry confidence in Intel’s capabilities.
  • Competitive Landscape: This collaboration aims to address the dominance of NVIDIA and AMD in specific sectors like data center GPUs and AI accelerators.
  • Increased Foundry Revenue: Intel anticipates exceeding its previous revenue target of $10 billion from foundry services, reaching $15 billion due to growing demand.
  • Industry Implications: The collaboration has the potential to reshape the semiconductor landscape, potentially leading to more competitive chip offerings and advancements in various technological sectors.

Overall, Intel’s announcement marks a significant shift in the chip manufacturing landscape, showcasing their ambition to reclaim the top spot from TSMC. However, the full scope and impact of this collaboration remain to be seen due to the limited information available.

Stability AI has announced Stable Diffusion 3 (SD3), its next-generation text-to-image model.

  • Improved Architecture: SD3 leverages a novel “diffusion transformer” architecture, aiming to surpass previous versions in image quality and detail.
  • Open Weights: Like prior models, Stable Diffusion 3 remains open-weight and source-available, allowing for customization and local execution.
  • Wider Range of Devices: Unlike competitors’ offerings, SD3 comes in various sizes, enabling it to run on a broader spectrum of devices, from powerful workstations to personal computers.
  • Potential for Video Generation: While details are limited, SD3 hints at the potential for video generation, similar to recent advancements from OpenAI.
  • Enhanced Efficiency: SD3 utilizes “flow matching” to improve image quality while maintaining computational efficiency, potentially reducing costs associated with training and generating images.

Overall, Stable Diffusion 3 signifies Stability AI’s continued efforts to push the boundaries of text-to-image generation technology. It offers a potentially powerful and accessible tool for various applications, but further information is needed to fully understand its functionalities and impact.

  • Google is expanding its partnership with Reddit, allowing Google to train its AI models on Reddit’s data.
  • Google will use a new data API to access and understand Reddit content.
  • Reddit will gain access to Google’s AI services.
  • Google will use artificial intelligence to enhance Reddit’s search function.
  • Google will also be able to display Reddit content more easily in its products.
  • Google and Reddit will work together to improve user experiences.
  • This partnership will not affect how Google uses publicly available information.
  • The deal comes as Reddit is preparing for its IPO.

Chinese AI startup Moonshot AI has secured a new funding round backed by Alibaba.

  • This funding round marks the largest single financing round for a Chinese large language model (LLM) startup to date.
  • Moonshot AI, founded in March 2023, focuses on developing generative artificial intelligence technologies, such as its Kimi chatbot and a platform for developers to build AI applications.
  • The investment from Alibaba, along with other backers like Monolith Management and HongShan (formerly Sequoia China), underscores the growing interest in AI technologies in China and the competitive landscape within the industry.

Google AI introduces Gemma, a new family of lightweight, state-of-the-art open models built with safety and responsible use in mind.

  • Gemma models are built from the same research and technology used to create the Gemini models.
  • Gemma is smaller than previous models and can run on a laptop or desktop.
  • They are available in two sizes, 2B and 7B, and come with pre-trained and instruction-tuned variants.
  • A Responsible Generative AI Toolkit is released alongside Gemma to aid in creating safer AI applications.
  • Google hopes developers will use Gemma and stay within its ecosystem.
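Since the announcement notes that Gemma can run on a laptop or desktop, a back-of-the-envelope sketch of the memory its weights would need at different precisions helps show why the 2B and 7B sizes matter; the parameter counts are the nominal figures, and real usage is higher due to activations and other overhead.

```python
# Rough weight-memory estimate for Gemma's 2B and 7B variants.
# Memory ~= parameter count x bytes per parameter; actual usage is
# higher due to activations, the KV cache, and framework overhead.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(params: float, precision: str) -> float:
    """Return approximate weight memory in gigabytes."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for name, params in [("Gemma 2B", 2e9), ("Gemma 7B", 7e9)]:
    for prec in ("fp16", "int4"):
        print(f"{name} @ {prec}: ~{weight_memory_gb(params, prec):.1f} GB")
# Gemma 7B needs roughly 14 GB at fp16 but about 3.5 GB at int4;
# quantization is what makes laptop inference plausible.
```

The same arithmetic explains why the smaller 2B variant fits comfortably on ordinary desktops even without quantization.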

AI chip startup Groq has attracted attention for its Language Processing Units (LPUs), chips built to speed up AI inference.

  • Groq’s LPUs are designed specifically for running large language models (LLMs) and are claimed to be much faster than Nvidia’s GPUs, the current industry standard.
  • Groq is an “inference engine” that helps existing chatbots like ChatGPT run faster, not a replacement for them.
  • Early tests show Groq’s LPUs can generate responses at significantly higher speeds compared to other options.
  • This increased speed could eliminate delays in AI chatbot conversations, making them feel more natural and real-time.
  • Groq’s founder, Jonathan Ross, has a history of developing cutting-edge AI chips at Google.
  • While Groq’s technology is promising, it remains to be seen if it can achieve the same scalability as established players like Nvidia and Google.
  • The potential of Groq’s chips has sparked interest in the AI community, including OpenAI CEO Sam Altman.

Overall, Groq’s LPU technology has the potential to revolutionize the way we interact with AI chatbots by enabling faster, more natural conversations. However, it’s still early days, and the company needs to prove its scalability and compete with established players in the market.
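To see why inference throughput changes how a chatbot feels, here is a small sketch converting tokens-per-second into response time; the throughput figures are hypothetical illustrations, not Groq’s or Nvidia’s published numbers.

```python
# How long a chatbot reply takes to generate at different speeds.
# Throughput values below are hypothetical, for illustration only.

def response_seconds(tokens: int, tokens_per_second: float) -> float:
    """Time to generate `tokens` at a given throughput."""
    return tokens / tokens_per_second

REPLY_TOKENS = 300  # a typical medium-length chatbot answer
for label, tps in [("slower serving stack", 50), ("fast inference engine", 500)]:
    print(f"{label}: {response_seconds(REPLY_TOKENS, tps):.1f} s")
# At 50 tok/s the reply streams in over several seconds; at 500 tok/s
# it finishes in under a second, which reads as real-time.
```

This is why raw generation speed, not just model quality, determines whether a conversation feels natural.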

  • 20 leading technology companies, including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, have pledged to combat deceptive use of AI in the 2024 elections.
  • The accord, announced at the Munich Security Conference, aims to prevent harmful AI content from interfering with elections worldwide.
  • Signatories commit to collaborative efforts to develop technology to detect and address deceptive AI content, as well as to drive educational campaigns and promote transparency.
  • The accord covers AI-generated audio, video, and images that deceive voters or provide false information about elections.
  • Participating companies agree to specific commitments, including developing technology, assessing risks, detecting and addressing deceptive content, fostering resilience, providing transparency, and engaging with civil society organizations and academics.
  • The accord is seen as a significant step in safeguarding online communities and advancing election integrity.

Meta’s V-JEPA model represents a significant advancement in machine intelligence, aiming to emulate human-like learning by forming internal models of the world.

OpenAI has unveiled Sora, a tool capable of generating realistic videos up to a minute long from text prompts, marking a significant advancement in AI-driven video creation.

Gemini 1.5, Google’s next-generation AI model, introduces dramatic improvements in performance and long-context understanding across various modalities, showcasing advancements in AI technology.

ChatGPT is introducing a memory feature that allows it to remember specific details from users’ conversations, enhancing future interactions and personalization.

Nvidia has released Chat with RTX, an AI chatbot demo app for its GPUs, enabling users to run a personal AI chatbot locally on their Windows PC, facilitating tasks like summarizing documents and analyzing videos.

Nvidia briefly surpassed Amazon in market value, fueled by high demand for its AI computing chips.

Nvidia’s market value reached about $1.78 trillion intraday, briefly topping Amazon, whose shares closed at a value of $1.79 trillion; Nvidia temporarily became the fourth most valuable US-listed company.

Google has rebranded its AI chatbot Bard to Gemini, offering a fresh app rollout and new subscription options, highlighting the company’s commitment to AI assistants.

Meta, the owner of Instagram and Facebook, will start labeling images created with leading artificial intelligence tools in the coming months, aiming to address concerns about the potential for AI-generated content to mislead users.

Apple has released MGIE, an AI model for instruction-based image editing that leverages multimodal large language models (MLLMs) to interpret user commands and perform pixel-level manipulations, showcasing the potential of MLLMs to enhance image editing tasks.

Amazon is rolling out Rufus, an AI-powered shopping assistant.

  • Rufus is still in beta, but it is being rolled out to more and more customers.
  • Rufus can help customers with a variety of tasks, such as finding products, comparing prices, and making purchasing decisions.
  • Rufus is powered by Amazon’s natural language processing technology, which allows it to understand and respond to customer queries in a natural way.
